Test Report: Docker_Linux_containerd 21966

                    
f7c9a93757611cb83a7bfb680dda9add42d627cb:2025-11-23:42464

Failed tests (4/333)

Order  Failed test  Duration (s)
352 TestStartStop/group/old-k8s-version/serial/DeployApp 14.57
355 TestStartStop/group/embed-certs/serial/DeployApp 12.98
356 TestStartStop/group/no-preload/serial/DeployApp 13.46
361 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 14.03
TestStartStop/group/old-k8s-version/serial/DeployApp (14.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-644335 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [37e84c8a-3caa-4e37-9815-c33d14d90a29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [37e84c8a-3caa-4e37-9815-c33d14d90a29] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004422143s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-644335 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
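
The assertion above execs 'ulimit -n' inside the busybox pod and expects an open-file soft limit of 1048576; this run observed the common default of 1024. A minimal sketch for reproducing the check by hand, assuming the old-k8s-version-644335 profile and the busybox pod from this run are still up:

    # Soft open-file limit as seen inside the busybox test pod (test expects 1048576)
    kubectl --context old-k8s-version-644335 exec busybox -- /bin/sh -c "ulimit -n"

    # Follow-up diagnostic (not part of the test): the limit applied to PID 1
    # of the minikube node container, for comparison
    docker exec old-k8s-version-644335 sh -c "grep 'open files' /proc/1/limits"
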
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-644335
helpers_test.go:243: (dbg) docker inspect old-k8s-version-644335:

-- stdout --
	[
	    {
	        "Id": "7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090",
	        "Created": "2025-11-23T08:31:21.747802519Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307672,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:31:21.797762818Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090/hosts",
	        "LogPath": "/var/lib/docker/containers/7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090/7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090-json.log",
	        "Name": "/old-k8s-version-644335",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-644335:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-644335",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090",
	                "LowerDir": "/var/lib/docker/overlay2/7e9348bdbf7fe5caf38705598582fde24d8735176b764198431205917d3b77af-init/diff:/var/lib/docker/overlay2/f8ae64c4d7d1e12e69b7d69a01d34a96c2f353aeac48a9b438b028f010c32149/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e9348bdbf7fe5caf38705598582fde24d8735176b764198431205917d3b77af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e9348bdbf7fe5caf38705598582fde24d8735176b764198431205917d3b77af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e9348bdbf7fe5caf38705598582fde24d8735176b764198431205917d3b77af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-644335",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-644335/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-644335",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-644335",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-644335",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dea5f6418c9754475e9b8008038381cb3d815f711e582f84688bee584969dd23",
	            "SandboxKey": "/var/run/docker/netns/dea5f6418c97",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-644335": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c091c62244399da24761260bbfae0c90eec0be75c4b7689a6c70a071c5b8f23e",
	                    "EndpointID": "ab0acae5c0b37e265556d281ecf3291a32c77d757fcd3abbe1b2799b8bbf788b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "1a:e6:a3:19:3e:4b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-644335",
	                        "7f1745895cd2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
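
Note that "Ulimits": [] in the HostConfig above means no explicit nofile limit was set when the node container was created, so the container inherits the Docker daemon's defaults. A quick sketch for pulling just that field with standard docker inspect templating:

    # Print only the ulimit settings recorded for the node container
    docker inspect --format '{{json .HostConfig.Ulimits}}' old-k8s-version-644335
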
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-644335 -n old-k8s-version-644335
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-644335 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-644335 logs -n 25: (1.218516185s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-366757 sudo systemctl cat kubelet --no-pager                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                          │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /var/lib/kubelet/config.yaml                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status docker --all --full --no-pager                                                                                                          │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat docker --no-pager                                                                                                                          │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/docker/daemon.json                                                                                                                              │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo docker system info                                                                                                                                       │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl status cri-docker --all --full --no-pager                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat cri-docker --no-pager                                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cri-dockerd --version                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status containerd --all --full --no-pager                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl cat containerd --no-pager                                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /lib/systemd/system/containerd.service                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/containerd/config.toml                                                                                                                          │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo containerd config dump                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status crio --all --full --no-pager                                                                                                            │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat crio --no-pager                                                                                                                            │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                  │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo crio config                                                                                                                                              │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ delete  │ -p bridge-366757                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-900754                                                                                                                                                │ disable-driver-mounts-900754 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ default-k8s-diff-port-589368 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:32:02
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:32:02.365149  326134 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:32:02.365430  326134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:02.365439  326134 out.go:374] Setting ErrFile to fd 2...
	I1123 08:32:02.365444  326134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:02.365688  326134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:32:02.366222  326134 out.go:368] Setting JSON to false
	I1123 08:32:02.367361  326134 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4460,"bootTime":1763882262,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:32:02.367415  326134 start.go:143] virtualization: kvm guest
	I1123 08:32:02.369623  326134 out.go:179] * [default-k8s-diff-port-589368] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:32:02.370841  326134 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:32:02.370904  326134 notify.go:221] Checking for updates...
	I1123 08:32:02.373164  326134 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:32:02.374347  326134 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:02.375434  326134 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 08:32:02.376528  326134 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:32:02.377529  326134 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:32:02.378965  326134 config.go:182] Loaded profile config "embed-certs-329854": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:02.379086  326134 config.go:182] Loaded profile config "no-preload-073500": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:02.379154  326134 config.go:182] Loaded profile config "old-k8s-version-644335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:32:02.379256  326134 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:32:02.407081  326134 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:32:02.407244  326134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:02.472040  326134 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-23 08:32:02.45999755 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:02.472162  326134 docker.go:319] overlay module found
	I1123 08:32:02.474068  326134 out.go:179] * Using the docker driver based on user configuration
	I1123 08:32:02.475288  326134 start.go:309] selected driver: docker
	I1123 08:32:02.475306  326134 start.go:927] validating driver "docker" against <nil>
	I1123 08:32:02.475318  326134 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:32:02.476049  326134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:02.538637  326134 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-23 08:32:02.527137057 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:02.538955  326134 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:32:02.539261  326134 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:32:02.540977  326134 out.go:179] * Using Docker driver with root privileges
	I1123 08:32:02.542238  326134 cni.go:84] Creating CNI manager for ""
	I1123 08:32:02.542329  326134 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:02.542344  326134 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:32:02.542437  326134 start.go:353] cluster config:
	{Name:default-k8s-diff-port-589368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-589368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:02.543888  326134 out.go:179] * Starting "default-k8s-diff-port-589368" primary control-plane node in "default-k8s-diff-port-589368" cluster
	I1123 08:32:02.544940  326134 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:32:02.546095  326134 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:32:02.547277  326134 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:32:02.547320  326134 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 08:32:02.547343  326134 cache.go:65] Caching tarball of preloaded images
	I1123 08:32:02.547394  326134 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:32:02.547459  326134 preload.go:238] Found /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:32:02.547475  326134 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:32:02.547640  326134 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/config.json ...
	I1123 08:32:02.547678  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/config.json: {Name:mk1f809d1452f95feae198ba9c84eb715cc0365a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:02.572550  326134 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:32:02.572569  326134 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:32:02.572586  326134 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:32:02.572623  326134 start.go:360] acquireMachinesLock for default-k8s-diff-port-589368: {Name:mk824e721d9528bfc83f46b2967dfcdfbed28a63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:32:02.572732  326134 start.go:364] duration metric: took 90.345µs to acquireMachinesLock for "default-k8s-diff-port-589368"
	I1123 08:32:02.572763  326134 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-589368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-589368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:32:02.572832  326134 start.go:125] createHost starting for "" (driver="docker")
	W1123 08:32:00.803173  306530 node_ready.go:57] node "old-k8s-version-644335" has "Ready":"False" status (will retry)
	W1123 08:32:03.302835  306530 node_ready.go:57] node "old-k8s-version-644335" has "Ready":"False" status (will retry)
	I1123 08:32:02.574971  326134 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:32:02.575244  326134 start.go:159] libmachine.API.Create for "default-k8s-diff-port-589368" (driver="docker")
	I1123 08:32:02.575290  326134 client.go:173] LocalClient.Create starting
	I1123 08:32:02.575395  326134 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem
	I1123 08:32:02.575452  326134 main.go:143] libmachine: Decoding PEM data...
	I1123 08:32:02.575478  326134 main.go:143] libmachine: Parsing certificate...
	I1123 08:32:02.575588  326134 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10922/.minikube/certs/cert.pem
	I1123 08:32:02.575620  326134 main.go:143] libmachine: Decoding PEM data...
	I1123 08:32:02.575639  326134 main.go:143] libmachine: Parsing certificate...
	I1123 08:32:02.576024  326134 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-589368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:32:02.594523  326134 cli_runner.go:211] docker network inspect default-k8s-diff-port-589368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:32:02.594623  326134 network_create.go:284] running [docker network inspect default-k8s-diff-port-589368] to gather additional debugging logs...
	I1123 08:32:02.594647  326134 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-589368
	W1123 08:32:02.612207  326134 cli_runner.go:211] docker network inspect default-k8s-diff-port-589368 returned with exit code 1
	I1123 08:32:02.612243  326134 network_create.go:287] error running [docker network inspect default-k8s-diff-port-589368]: docker network inspect default-k8s-diff-port-589368: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-589368 not found
	I1123 08:32:02.612266  326134 network_create.go:289] output of [docker network inspect default-k8s-diff-port-589368]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-589368 not found
	
	** /stderr **
	I1123 08:32:02.612490  326134 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:32:02.630896  326134 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-88eb84305350 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:b0:8c:95:93:f7} reservation:<nil>}
	I1123 08:32:02.631571  326134 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d1d9c6d8034d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:6f:a4:bc:f0:ec} reservation:<nil>}
	I1123 08:32:02.632272  326134 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9a1acaa7a50f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:4f:2c:f2:7e:e0} reservation:<nil>}
	I1123 08:32:02.632976  326134 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4bf2fad4a2d5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:eb:26:95:d5:87} reservation:<nil>}
	I1123 08:32:02.633817  326134 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8c9f0}
	I1123 08:32:02.633845  326134 network_create.go:124] attempt to create docker network default-k8s-diff-port-589368 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 08:32:02.633926  326134 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-589368 default-k8s-diff-port-589368
	I1123 08:32:02.687114  326134 network_create.go:108] docker network default-k8s-diff-port-589368 192.168.85.0/24 created
	I1123 08:32:02.687145  326134 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-589368" container
	I1123 08:32:02.687215  326134 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:32:02.707082  326134 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-589368 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-589368 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:32:02.726188  326134 oci.go:103] Successfully created a docker volume default-k8s-diff-port-589368
	I1123 08:32:02.726303  326134 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-589368-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-589368 --entrypoint /usr/bin/test -v default-k8s-diff-port-589368:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:32:03.147122  326134 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-589368
	I1123 08:32:03.147192  326134 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:32:03.147205  326134 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:32:03.147259  326134 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-589368:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 08:32:05.804418  306530 node_ready.go:57] node "old-k8s-version-644335" has "Ready":"False" status (will retry)
	I1123 08:32:06.817045  306530 node_ready.go:49] node "old-k8s-version-644335" is "Ready"
	I1123 08:32:06.817079  306530 node_ready.go:38] duration metric: took 14.518019801s for node "old-k8s-version-644335" to be "Ready" ...
	I1123 08:32:06.817095  306530 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:32:06.817160  306530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:32:06.856324  306530 api_server.go:72] duration metric: took 15.083460409s to wait for apiserver process to appear ...
	I1123 08:32:06.856432  306530 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:32:06.856532  306530 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:32:06.980123  306530 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:32:06.981554  306530 api_server.go:141] control plane version: v1.28.0
	I1123 08:32:06.981588  306530 api_server.go:131] duration metric: took 125.088431ms to wait for apiserver health ...
	I1123 08:32:06.981600  306530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:32:06.998688  306530 system_pods.go:59] 8 kube-system pods found
	I1123 08:32:06.998904  306530 system_pods.go:61] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending
	I1123 08:32:06.998916  306530 system_pods.go:61] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:06.998922  306530 system_pods.go:61] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:06.998927  306530 system_pods.go:61] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:06.998933  306530 system_pods.go:61] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:06.998937  306530 system_pods.go:61] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:06.998943  306530 system_pods.go:61] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:06.998947  306530 system_pods.go:61] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending
	I1123 08:32:06.998954  306530 system_pods.go:74] duration metric: took 17.347129ms to wait for pod list to return data ...
	I1123 08:32:06.998974  306530 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:32:07.226094  306530 default_sa.go:45] found service account: "default"
	I1123 08:32:07.226239  306530 default_sa.go:55] duration metric: took 227.257372ms for default service account to be created ...
	I1123 08:32:07.226264  306530 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:32:07.246837  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:07.246885  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:32:07.246895  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:07.246902  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:07.246908  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:07.246914  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:07.246919  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:07.246924  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:07.246928  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending
	I1123 08:32:07.246952  306530 retry.go:31] will retry after 228.830225ms: missing components: kube-dns
	I1123 08:32:07.657267  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:07.657320  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:32:07.657329  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:07.657338  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:07.657344  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:07.657355  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:07.657359  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:07.657364  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:07.657375  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:32:07.657393  306530 retry.go:31] will retry after 330.888352ms: missing components: kube-dns
	I1123 08:32:07.993569  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:07.993616  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:32:07.993626  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:07.993632  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:07.993638  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:07.993645  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:07.993650  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:07.993658  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:07.993666  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:32:07.993685  306530 retry.go:31] will retry after 309.599463ms: missing components: kube-dns
	I1123 08:32:08.311383  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:08.311430  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:32:08.311439  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:08.311447  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:08.311452  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:08.311459  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:08.311464  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:08.311469  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:08.311476  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:32:08.311495  306530 retry.go:31] will retry after 394.800609ms: missing components: kube-dns
	I1123 08:32:08.713374  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:08.713421  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:32:08.713431  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:08.713437  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:08.713443  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:08.713449  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:08.713454  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:08.713459  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:08.713467  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:32:08.713486  306530 retry.go:31] will retry after 626.925144ms: missing components: kube-dns
	I1123 08:32:09.345080  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:09.345107  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Running
	I1123 08:32:09.345113  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:09.345116  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:09.345120  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:09.345124  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:09.345127  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:09.345132  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:09.345135  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Running
	I1123 08:32:09.345143  306530 system_pods.go:126] duration metric: took 2.118863007s to wait for k8s-apps to be running ...
	I1123 08:32:09.345152  306530 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:32:09.345191  306530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:32:09.358626  306530 system_svc.go:56] duration metric: took 13.466479ms WaitForService to wait for kubelet
	I1123 08:32:09.358657  306530 kubeadm.go:587] duration metric: took 17.585799049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:32:09.358677  306530 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:32:09.361987  306530 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:32:09.362020  306530 node_conditions.go:123] node cpu capacity is 8
	I1123 08:32:09.362035  306530 node_conditions.go:105] duration metric: took 3.353601ms to run NodePressure ...
	I1123 08:32:09.362047  306530 start.go:242] waiting for startup goroutines ...
	I1123 08:32:09.362054  306530 start.go:247] waiting for cluster config update ...
	I1123 08:32:09.362064  306530 start.go:256] writing updated cluster config ...
	I1123 08:32:09.362362  306530 ssh_runner.go:195] Run: rm -f paused
	I1123 08:32:09.366401  306530 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:32:09.371069  306530 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mwh86" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.376095  306530 pod_ready.go:94] pod "coredns-5dd5756b68-mwh86" is "Ready"
	I1123 08:32:09.376151  306530 pod_ready.go:86] duration metric: took 5.053235ms for pod "coredns-5dd5756b68-mwh86" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.378964  306530 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.383429  306530 pod_ready.go:94] pod "etcd-old-k8s-version-644335" is "Ready"
	I1123 08:32:09.383449  306530 pod_ready.go:86] duration metric: took 4.460707ms for pod "etcd-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.386372  306530 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.390805  306530 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-644335" is "Ready"
	I1123 08:32:09.390831  306530 pod_ready.go:86] duration metric: took 4.429668ms for pod "kube-apiserver-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.393674  306530 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.771706  306530 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-644335" is "Ready"
	I1123 08:32:09.771737  306530 pod_ready.go:86] duration metric: took 378.038443ms for pod "kube-controller-manager-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.972722  306530 pod_ready.go:83] waiting for pod "kube-proxy-fjlft" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:10.370981  306530 pod_ready.go:94] pod "kube-proxy-fjlft" is "Ready"
	I1123 08:32:10.371006  306530 pod_ready.go:86] duration metric: took 398.257667ms for pod "kube-proxy-fjlft" in "kube-system" namespace to be "Ready" or be gone ...
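
	The pod_ready waits above poll each kube-system pod until its Ready condition is true (or the pod is gone). An equivalent one-off check from the CLI, shown here for the kube-dns label only (context name taken from this run; a sketch, not the test's actual code path):

	kubectl --context old-k8s-version-644335 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
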
	I1123 08:32:10.514640  314870 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:32:10.514754  314870 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:32:10.514892  314870 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:32:10.514978  314870 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:32:10.515028  314870 kubeadm.go:319] OS: Linux
	I1123 08:32:10.515136  314870 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:32:10.515203  314870 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:32:10.515270  314870 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:32:10.515345  314870 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:32:10.515430  314870 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:32:10.515522  314870 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:32:10.515590  314870 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:32:10.515669  314870 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:32:10.515775  314870 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:32:10.515911  314870 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:32:10.516048  314870 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:32:10.516136  314870 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:32:10.517774  314870 out.go:252]   - Generating certificates and keys ...
	I1123 08:32:10.517847  314870 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:32:10.517953  314870 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:32:10.518052  314870 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:32:10.518126  314870 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:32:10.518203  314870 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:32:10.518277  314870 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:32:10.518349  314870 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:32:10.518490  314870 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-073500] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:32:10.518572  314870 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:32:10.518711  314870 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-073500] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:32:10.518798  314870 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:32:10.518889  314870 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:32:10.518956  314870 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:32:10.519026  314870 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:32:10.519092  314870 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:32:10.519170  314870 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:32:10.519256  314870 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:32:10.519376  314870 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:32:10.519464  314870 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:32:10.519584  314870 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:32:10.519663  314870 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:32:10.521011  314870 out.go:252]   - Booting up control plane ...
	I1123 08:32:10.521091  314870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:32:10.521156  314870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:32:10.521244  314870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:32:10.521374  314870 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:32:10.521480  314870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:32:10.521611  314870 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:32:10.521695  314870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:32:10.521736  314870 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:32:10.521854  314870 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:32:10.522003  314870 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:32:10.522095  314870 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001353803s
	I1123 08:32:10.522185  314870 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:32:10.522259  314870 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1123 08:32:10.522341  314870 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:32:10.522412  314870 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:32:10.522480  314870 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.699829875s
	I1123 08:32:10.522563  314870 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.799265887s
	I1123 08:32:10.522638  314870 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002757713s
	I1123 08:32:10.522745  314870 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:32:10.522843  314870 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:32:10.522890  314870 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:32:10.523072  314870 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-073500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:32:10.523130  314870 kubeadm.go:319] [bootstrap-token] Using token: xbt0ca.7qstjhvsu0orvs9m
	I1123 08:32:10.524469  314870 out.go:252]   - Configuring RBAC rules ...
	I1123 08:32:10.524592  314870 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:32:10.524669  314870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:32:10.524799  314870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:32:10.524929  314870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:32:10.525032  314870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:32:10.525108  314870 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:32:10.525210  314870 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:32:10.525263  314870 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:32:10.525303  314870 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:32:10.525309  314870 kubeadm.go:319] 
	I1123 08:32:10.525395  314870 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:32:10.525405  314870 kubeadm.go:319] 
	I1123 08:32:10.525484  314870 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:32:10.525494  314870 kubeadm.go:319] 
	I1123 08:32:10.525553  314870 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:32:10.525650  314870 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:32:10.525770  314870 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:32:10.525784  314870 kubeadm.go:319] 
	I1123 08:32:10.525865  314870 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:32:10.525874  314870 kubeadm.go:319] 
	I1123 08:32:10.525921  314870 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:32:10.525933  314870 kubeadm.go:319] 
	I1123 08:32:10.525982  314870 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:32:10.526060  314870 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:32:10.526157  314870 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:32:10.526174  314870 kubeadm.go:319] 
	I1123 08:32:10.526255  314870 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:32:10.526369  314870 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:32:10.526377  314870 kubeadm.go:319] 
	I1123 08:32:10.526463  314870 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xbt0ca.7qstjhvsu0orvs9m \
	I1123 08:32:10.526577  314870 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54663b9a07b99dc9bb266865529fcac752142d486218fea7481ff08893e16d79 \
	I1123 08:32:10.526598  314870 kubeadm.go:319] 	--control-plane 
	I1123 08:32:10.526614  314870 kubeadm.go:319] 
	I1123 08:32:10.526738  314870 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:32:10.526747  314870 kubeadm.go:319] 
	I1123 08:32:10.526858  314870 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xbt0ca.7qstjhvsu0orvs9m \
	I1123 08:32:10.527006  314870 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54663b9a07b99dc9bb266865529fcac752142d486218fea7481ff08893e16d79 
	I1123 08:32:10.527037  314870 cni.go:84] Creating CNI manager for ""
	I1123 08:32:10.527048  314870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:10.528491  314870 out.go:179] * Configuring CNI (Container Networking Interface) ...
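
	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed on the control plane with the standard kubeadm recipe (assuming the default RSA CA; the certificateDir is the one logged above):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
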
	I1123 08:32:10.661882  318549 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:32:10.661958  318549 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:32:10.662066  318549 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:32:10.662134  318549 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:32:10.662176  318549 kubeadm.go:319] OS: Linux
	I1123 08:32:10.662232  318549 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:32:10.662287  318549 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:32:10.662345  318549 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:32:10.662402  318549 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:32:10.662459  318549 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:32:10.662604  318549 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:32:10.662668  318549 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:32:10.662736  318549 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:32:10.662825  318549 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:32:10.662948  318549 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:32:10.663057  318549 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:32:10.663137  318549 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:32:10.668048  318549 out.go:252]   - Generating certificates and keys ...
	I1123 08:32:10.668161  318549 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:32:10.668265  318549 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:32:10.668376  318549 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:32:10.668466  318549 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:32:10.668599  318549 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:32:10.668690  318549 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:32:10.668764  318549 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:32:10.668934  318549 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-329854 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:32:10.669038  318549 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:32:10.669238  318549 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-329854 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:32:10.669308  318549 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:32:10.669362  318549 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:32:10.669428  318549 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:32:10.669526  318549 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:32:10.669617  318549 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:32:10.669694  318549 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:32:10.669801  318549 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:32:10.669902  318549 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:32:10.670012  318549 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:32:10.670109  318549 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:32:10.670186  318549 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:32:10.671689  318549 out.go:252]   - Booting up control plane ...
	I1123 08:32:10.671802  318549 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:32:10.671886  318549 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:32:10.671975  318549 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:32:10.672136  318549 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:32:10.672270  318549 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:32:10.672398  318549 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:32:10.672485  318549 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:32:10.672541  318549 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:32:10.672685  318549 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:32:10.672807  318549 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:32:10.672896  318549 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.51456ms
	I1123 08:32:10.673028  318549 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:32:10.673147  318549 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 08:32:10.673274  318549 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:32:10.673393  318549 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:32:10.673520  318549 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.640714346s
	I1123 08:32:10.673611  318549 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.942673779s
	I1123 08:32:10.673699  318549 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001521455s
	I1123 08:32:10.673831  318549 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:32:10.674022  318549 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:32:10.674075  318549 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:32:10.674431  318549 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-329854 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:32:10.674559  318549 kubeadm.go:319] [bootstrap-token] Using token: k7cb1u.d38rcugcduwh7x1h
	I1123 08:32:10.677042  318549 out.go:252]   - Configuring RBAC rules ...
	I1123 08:32:10.677186  318549 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:32:10.677286  318549 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:32:10.677457  318549 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:32:10.677653  318549 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:32:10.677835  318549 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:32:10.677966  318549 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:32:10.678148  318549 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:32:10.678214  318549 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:32:10.678277  318549 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:32:10.678291  318549 kubeadm.go:319] 
	I1123 08:32:10.678372  318549 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:32:10.678383  318549 kubeadm.go:319] 
	I1123 08:32:10.678499  318549 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:32:10.678523  318549 kubeadm.go:319] 
	I1123 08:32:10.678562  318549 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:32:10.678627  318549 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:32:10.678691  318549 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:32:10.678699  318549 kubeadm.go:319] 
	I1123 08:32:10.678782  318549 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:32:10.678793  318549 kubeadm.go:319] 
	I1123 08:32:10.678855  318549 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:32:10.678864  318549 kubeadm.go:319] 
	I1123 08:32:10.678952  318549 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:32:10.679062  318549 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:32:10.679153  318549 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:32:10.679161  318549 kubeadm.go:319] 
	I1123 08:32:10.679258  318549 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:32:10.679363  318549 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:32:10.679372  318549 kubeadm.go:319] 
	I1123 08:32:10.679480  318549 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token k7cb1u.d38rcugcduwh7x1h \
	I1123 08:32:10.679652  318549 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54663b9a07b99dc9bb266865529fcac752142d486218fea7481ff08893e16d79 \
	I1123 08:32:10.679685  318549 kubeadm.go:319] 	--control-plane 
	I1123 08:32:10.679690  318549 kubeadm.go:319] 
	I1123 08:32:10.679806  318549 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:32:10.679822  318549 kubeadm.go:319] 
	I1123 08:32:10.679966  318549 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token k7cb1u.d38rcugcduwh7x1h \
	I1123 08:32:10.680132  318549 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54663b9a07b99dc9bb266865529fcac752142d486218fea7481ff08893e16d79 
	I1123 08:32:10.680155  318549 cni.go:84] Creating CNI manager for ""
	I1123 08:32:10.680163  318549 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:10.681571  318549 out.go:179] * Configuring CNI (Container Networking Interface) ...
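
	Both bootstraps above pick kindnet because the docker driver is paired with the containerd runtime (cni.go:143). Once the manifest from /var/tmp/minikube/cni.yaml is applied, the per-node pods can be checked directly; the app=kindnet label is an assumption based on the upstream kindnet manifest, not something this log confirms:

	kubectl --context embed-certs-329854 -n kube-system get pods -l app=kindnet -o wide
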
	I1123 08:32:10.571886  306530 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:10.971385  306530 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-644335" is "Ready"
	I1123 08:32:10.971415  306530 pod_ready.go:86] duration metric: took 399.502745ms for pod "kube-scheduler-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:10.971430  306530 pod_ready.go:40] duration metric: took 1.604993054s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:32:11.033661  306530 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1123 08:32:11.035051  306530 out.go:203] 
	W1123 08:32:11.036353  306530 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:32:11.039086  306530 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:32:11.040450  306530 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-644335" cluster and "default" namespace by default
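
	The warning above reflects the upstream kubectl version-skew policy: kubectl is only supported within one minor version (older or newer) of kube-apiserver, and a 1.34 client against a 1.28 control plane is six minors out. Both halves of the skew can be inspected, and a matching client used instead (a sketch):

	kubectl version --context old-k8s-version-644335
	minikube kubectl -- version    # runs a kubectl matching the cluster version
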
	I1123 08:32:10.529719  314870 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:32:10.534944  314870 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:32:10.534964  314870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:32:10.549433  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:32:10.799697  314870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:32:10.799770  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:10.799853  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-073500 minikube.k8s.io/updated_at=2025_11_23T08_32_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=no-preload-073500 minikube.k8s.io/primary=true
	I1123 08:32:10.898855  314870 ops.go:34] apiserver oom_adj: -16
	I1123 08:32:10.899004  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:11.399695  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:11.899300  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
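
	The repeated `kubectl get sa default` runs above are the test driver polling until kubeadm's controller-manager has created the default ServiceAccount (this is the elevateKubeSystemPrivileges wait, seen completing for process 318549 later in this log). A minimal sketch of the same poll, using the binary path and kubeconfig from this run:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
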
	I1123 08:32:08.170638  326134 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-589368:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.023218769s)
	I1123 08:32:08.170700  326134 kic.go:203] duration metric: took 5.023489314s to extract preloaded images to volume ...
	W1123 08:32:08.170828  326134 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:32:08.170891  326134 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:32:08.170957  326134 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:32:08.263601  326134 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-589368 --name default-k8s-diff-port-589368 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-589368 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-589368 --network default-k8s-diff-port-589368 --ip 192.168.85.2 --volume default-k8s-diff-port-589368:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:32:08.663695  326134 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-589368 --format={{.State.Running}}
	I1123 08:32:08.688914  326134 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-589368 --format={{.State.Status}}
	I1123 08:32:08.716117  326134 cli_runner.go:164] Run: docker exec default-k8s-diff-port-589368 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:32:08.771536  326134 oci.go:144] the created container "default-k8s-diff-port-589368" has a running status.
	I1123 08:32:08.771571  326134 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa...
	I1123 08:32:08.842617  326134 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:32:08.874417  326134 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-589368 --format={{.State.Status}}
	I1123 08:32:08.899344  326134 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:32:08.899370  326134 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-589368 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:32:08.979387  326134 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-589368 --format={{.State.Status}}
	I1123 08:32:09.003807  326134 machine.go:94] provisionDockerMachine start ...
	I1123 08:32:09.003983  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:09.033752  326134 main.go:143] libmachine: Using SSH client type: native
	I1123 08:32:09.034122  326134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 08:32:09.034138  326134 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:32:09.035254  326134 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48000->127.0.0.1:33108: read: connection reset by peer
	I1123 08:32:12.184579  326134 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-589368
	
	I1123 08:32:12.184605  326134 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-589368"
	I1123 08:32:12.184787  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:12.204947  326134 main.go:143] libmachine: Using SSH client type: native
	I1123 08:32:12.205165  326134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 08:32:12.205178  326134 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-589368 && echo "default-k8s-diff-port-589368" | sudo tee /etc/hostname
	I1123 08:32:12.364428  326134 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-589368
	
	I1123 08:32:12.364496  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:10.682639  318549 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:32:10.687459  318549 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:32:10.687477  318549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:32:10.701352  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:32:10.977767  318549 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:32:10.977927  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-329854 minikube.k8s.io/updated_at=2025_11_23T08_32_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=embed-certs-329854 minikube.k8s.io/primary=true
	I1123 08:32:10.978075  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:10.993796  318549 ops.go:34] apiserver oom_adj: -16
	I1123 08:32:11.078883  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:11.579842  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:12.079241  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:12.579599  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:13.079939  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:13.579129  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:12.384269  326134 main.go:143] libmachine: Using SSH client type: native
	I1123 08:32:12.384542  326134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 08:32:12.384562  326134 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-589368' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-589368/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-589368' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:32:12.531200  326134 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:32:12.531235  326134 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10922/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10922/.minikube}
	I1123 08:32:12.531271  326134 ubuntu.go:190] setting up certificates
	I1123 08:32:12.531290  326134 provision.go:84] configureAuth start
	I1123 08:32:12.531361  326134 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-589368
	I1123 08:32:12.549866  326134 provision.go:143] copyHostCerts
	I1123 08:32:12.549946  326134 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10922/.minikube/ca.pem, removing ...
	I1123 08:32:12.549963  326134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.pem
	I1123 08:32:12.550040  326134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10922/.minikube/ca.pem (1078 bytes)
	I1123 08:32:12.550152  326134 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10922/.minikube/cert.pem, removing ...
	I1123 08:32:12.550166  326134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10922/.minikube/cert.pem
	I1123 08:32:12.550224  326134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10922/.minikube/cert.pem (1123 bytes)
	I1123 08:32:12.550308  326134 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10922/.minikube/key.pem, removing ...
	I1123 08:32:12.550319  326134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10922/.minikube/key.pem
	I1123 08:32:12.550356  326134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10922/.minikube/key.pem (1675 bytes)
	I1123 08:32:12.550440  326134 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10922/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-589368 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-589368 localhost minikube]
	I1123 08:32:12.630027  326134 provision.go:177] copyRemoteCerts
	I1123 08:32:12.630087  326134 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:32:12.630122  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:12.651460  326134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa Username:docker}
	I1123 08:32:12.754829  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:32:12.774851  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 08:32:12.793762  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:32:12.812458  326134 provision.go:87] duration metric: took 281.153863ms to configureAuth
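
	configureAuth above generates a server certificate whose SANs (the san=[...] list in the provision.go:117 line) must cover every name the machine is dialed by, including 127.0.0.1 for the published ports. The resulting SAN list can be read back with openssl (path as logged; a sketch):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21966-10922/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
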
	I1123 08:32:12.812486  326134 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:32:12.812713  326134 config.go:182] Loaded profile config "default-k8s-diff-port-589368": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:12.812728  326134 machine.go:97] duration metric: took 3.808874609s to provisionDockerMachine
	I1123 08:32:12.812737  326134 client.go:176] duration metric: took 10.237434724s to LocalClient.Create
	I1123 08:32:12.812760  326134 start.go:167] duration metric: took 10.237519395s to libmachine.API.Create "default-k8s-diff-port-589368"
	I1123 08:32:12.812772  326134 start.go:293] postStartSetup for "default-k8s-diff-port-589368" (driver="docker")
	I1123 08:32:12.812783  326134 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:32:12.812843  326134 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:32:12.812958  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:12.832166  326134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa Username:docker}
	I1123 08:32:12.937078  326134 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:32:12.941011  326134 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:32:12.941047  326134 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:32:12.941061  326134 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10922/.minikube/addons for local assets ...
	I1123 08:32:12.941116  326134 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10922/.minikube/files for local assets ...
	I1123 08:32:12.941234  326134 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10922/.minikube/files/etc/ssl/certs/144792.pem -> 144792.pem in /etc/ssl/certs
	I1123 08:32:12.941372  326134 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:32:12.950015  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/files/etc/ssl/certs/144792.pem --> /etc/ssl/certs/144792.pem (1708 bytes)
	I1123 08:32:12.972737  326134 start.go:296] duration metric: took 159.951896ms for postStartSetup
	I1123 08:32:12.973091  326134 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-589368
	I1123 08:32:12.991848  326134 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/config.json ...
	I1123 08:32:12.992096  326134 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:32:12.992133  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:13.010214  326134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa Username:docker}
	I1123 08:32:13.111184  326134 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:32:13.117057  326134 start.go:128] duration metric: took 10.544210299s to createHost
	I1123 08:32:13.117083  326134 start.go:83] releasing machines lock for "default-k8s-diff-port-589368", held for 10.544335962s
	I1123 08:32:13.117159  326134 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-589368
	I1123 08:32:13.138821  326134 ssh_runner.go:195] Run: cat /version.json
	I1123 08:32:13.138876  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:13.138898  326134 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:32:13.139000  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:13.159900  326134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa Username:docker}
	I1123 08:32:13.160675  326134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa Username:docker}
	I1123 08:32:13.319429  326134 ssh_runner.go:195] Run: systemctl --version
	I1123 08:32:13.326550  326134 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:32:13.331385  326134 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:32:13.331460  326134 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:32:13.358246  326134 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 08:32:13.358269  326134 start.go:496] detecting cgroup driver to use...
	I1123 08:32:13.358306  326134 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:32:13.358360  326134 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:32:13.375851  326134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:32:13.388488  326134 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:32:13.388557  326134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:32:13.405861  326134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:32:13.426538  326134 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:32:13.517053  326134 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:32:13.614675  326134 docker.go:234] disabling docker service ...
	I1123 08:32:13.614753  326134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:32:13.636054  326134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:32:13.650186  326134 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:32:13.740250  326134 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:32:13.843726  326134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:32:13.858236  326134 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:32:13.875875  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:32:13.887815  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:32:13.899519  326134 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 08:32:13.899579  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 08:32:13.910780  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:32:13.923659  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:32:13.936812  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:32:13.947028  326134 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:32:13.955978  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:32:13.967808  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:32:13.979257  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:32:13.989005  326134 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:32:13.997178  326134 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:32:14.005215  326134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:32:14.085715  326134 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:32:14.211871  326134 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:32:14.211935  326134 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:32:14.216762  326134 start.go:564] Will wait 60s for crictl version
	I1123 08:32:14.216825  326134 ssh_runner.go:195] Run: which crictl
	I1123 08:32:14.221323  326134 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:32:14.248285  326134 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:32:14.248347  326134 ssh_runner.go:195] Run: containerd --version
	I1123 08:32:14.272256  326134 ssh_runner.go:195] Run: containerd --version
	I1123 08:32:14.296476  326134 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
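
	The sed sequence above rewrites /etc/containerd/config.toml in place: pinning the pause image, forcing SystemdCgroup to match the detected systemd cgroup driver, normalizing the runc runtime to io.containerd.runc.v2, and re-enabling unprivileged ports, before restarting containerd. The net effect can be spot-checked after the restart (a sketch):

	grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
	# expected, per the edits logged above:
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   SystemdCgroup = true
	#   enable_unprivileged_ports = true
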
	I1123 08:32:14.079520  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:14.579711  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:15.079695  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:15.153588  318549 kubeadm.go:1114] duration metric: took 4.175594558s to wait for elevateKubeSystemPrivileges
	I1123 08:32:15.153626  318549 kubeadm.go:403] duration metric: took 16.859259885s to StartCluster
	I1123 08:32:15.153647  318549 settings.go:142] acquiring lock: {Name:mk436e1608db541c991c29c7031bb6bf416025bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:15.153728  318549 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:15.155269  318549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/kubeconfig: {Name:mk728060aa1e1ef3d8ab678673d9cf01ff53b55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:15.155565  318549 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:32:15.155691  318549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:32:15.155714  318549 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:32:15.155816  318549 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-329854"
	I1123 08:32:15.155840  318549 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-329854"
	I1123 08:32:15.155876  318549 host.go:66] Checking if "embed-certs-329854" exists ...
	I1123 08:32:15.155920  318549 config.go:182] Loaded profile config "embed-certs-329854": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:15.155982  318549 addons.go:70] Setting default-storageclass=true in profile "embed-certs-329854"
	I1123 08:32:15.156001  318549 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-329854"
	I1123 08:32:15.156259  318549 cli_runner.go:164] Run: docker container inspect embed-certs-329854 --format={{.State.Status}}
	I1123 08:32:15.156418  318549 cli_runner.go:164] Run: docker container inspect embed-certs-329854 --format={{.State.Status}}
	I1123 08:32:15.161020  318549 out.go:179] * Verifying Kubernetes components...
	I1123 08:32:15.162691  318549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:32:15.183365  318549 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:32:15.184604  318549 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:32:15.184635  318549 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:32:15.184694  318549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-329854
	I1123 08:32:15.184807  318549 addons.go:239] Setting addon default-storageclass=true in "embed-certs-329854"
	I1123 08:32:15.184853  318549 host.go:66] Checking if "embed-certs-329854" exists ...
	I1123 08:32:15.185323  318549 cli_runner.go:164] Run: docker container inspect embed-certs-329854 --format={{.State.Status}}
	I1123 08:32:15.219027  318549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/embed-certs-329854/id_rsa Username:docker}
	I1123 08:32:15.225516  318549 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:32:15.225540  318549 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:32:15.225599  318549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-329854
	I1123 08:32:15.250951  318549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/embed-certs-329854/id_rsa Username:docker}
	I1123 08:32:15.259884  318549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:32:15.330669  318549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:32:15.373794  318549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:32:15.402456  318549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:32:15.549450  318549 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 08:32:15.550676  318549 node_ready.go:35] waiting up to 6m0s for node "embed-certs-329854" to be "Ready" ...
	I1123 08:32:15.811232  318549 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
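	The node_ready.go:35 wait above polls the node object until its Ready condition turns True, with a 6m budget. A minimal way to reproduce the same check by hand, assuming the kubeconfig written earlier is in place (paths and context name taken from this log, but the command itself is an illustrative equivalent, not what minikube runs internally):
	
	    # Block until the node reports Ready, using the same 6m budget as the harness
	    kubectl --kubeconfig /home/jenkins/minikube-integration/21966-10922/kubeconfig \
	      wait --for=condition=Ready node/embed-certs-329854 --timeout=6m
	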
	I1123 08:32:14.297818  326134 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-589368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:32:14.316606  326134 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:32:14.321176  326134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
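	The one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current one, and copy the result back with sudo so the redirection itself does not need root. Expanded into a readable sketch (the temp path is illustrative):
	
	    HOSTS_LINE=$'192.168.85.1\thost.minikube.internal'
	    # Drop any previous entry, then re-append the fresh one
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$HOSTS_LINE"; } > /tmp/hosts.new
	    # cp (not mv) keeps the inode and permissions of /etc/hosts intact
	    sudo cp /tmp/hosts.new /etc/hosts
	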
	I1123 08:32:14.331789  326134 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-589368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-589368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:32:14.331893  326134 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:32:14.331935  326134 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:32:14.357362  326134 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:32:14.357385  326134 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:32:14.357437  326134 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:32:14.385849  326134 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:32:14.385878  326134 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:32:14.385887  326134 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1123 08:32:14.386008  326134 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-589368 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-589368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
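	In the kubelet unit rendered above, the empty ExecStart= line is deliberate systemd idiom: for non-oneshot services a second ExecStart is rejected unless the command list is cleared first, so drop-ins conventionally blank it before redefining the command. The effective unit can be inspected on the node with standard systemctl commands (a sketch, not part of this log):
	
	    # Show the base unit plus the 10-kubeadm.conf drop-in that minikube wrote
	    sudo systemctl cat kubelet
	    # Re-read unit files after any drop-in change
	    sudo systemctl daemon-reload
	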
	I1123 08:32:14.386080  326134 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:32:14.416374  326134 cni.go:84] Creating CNI manager for ""
	I1123 08:32:14.416404  326134 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:14.416421  326134 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:32:14.416449  326134 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-589368 NodeName:default-k8s-diff-port-589368 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:32:14.416615  326134 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-589368"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
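	The generated kubeadm config above can be sanity-checked offline before it is handed to kubeadm init. A minimal sketch, assuming the staged kubeadm.yaml.new has been copied into place (as happens later in this log) and that the node's kubeadm is recent enough to ship the config validate subcommand:
	
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    # Or exercise the full init path without mutating the node
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	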
	
	I1123 08:32:14.416688  326134 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:32:14.425909  326134 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:32:14.425995  326134 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:32:14.434360  326134 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1123 08:32:14.449035  326134 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:32:14.468244  326134 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1123 08:32:14.483002  326134 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:32:14.486912  326134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:32:14.497851  326134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:32:14.581410  326134 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:32:14.614517  326134 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368 for IP: 192.168.85.2
	I1123 08:32:14.614543  326134 certs.go:195] generating shared ca certs ...
	I1123 08:32:14.614563  326134 certs.go:227] acquiring lock for ca certs: {Name:mk76a9e50dc1d967f9b3db23534d451cf588eb45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.614740  326134 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.key
	I1123 08:32:14.614805  326134 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10922/.minikube/proxy-client-ca.key
	I1123 08:32:14.614818  326134 certs.go:257] generating profile certs ...
	I1123 08:32:14.614890  326134 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.key
	I1123 08:32:14.614908  326134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.crt with IP's: []
	I1123 08:32:14.647919  326134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.crt ...
	I1123 08:32:14.647953  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.crt: {Name:mkb3dce0606b4e20557ecb8120f9887326d0cf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.648166  326134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.key ...
	I1123 08:32:14.648190  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.key: {Name:mk629923008b86112793d0aec571412cc0ad28a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.648339  326134 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key.6d2968c0
	I1123 08:32:14.648368  326134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt.6d2968c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:32:14.686307  326134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt.6d2968c0 ...
	I1123 08:32:14.686332  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt.6d2968c0: {Name:mk1e00809095cee6e818a6e146ec68827bae6918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.686524  326134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key.6d2968c0 ...
	I1123 08:32:14.686541  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key.6d2968c0: {Name:mkfb447eff51bae5aefaab05debc824049f50368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.686645  326134 certs.go:382] copying /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt.6d2968c0 -> /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt
	I1123 08:32:14.686795  326134 certs.go:386] copying /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key.6d2968c0 -> /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key
	I1123 08:32:14.686890  326134 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.key
	I1123 08:32:14.686910  326134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.crt with IP's: []
	I1123 08:32:14.924216  326134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.crt ...
	I1123 08:32:14.924254  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.crt: {Name:mkb9386d64575dc8ac7523514efe858c9b7529d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.924454  326134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.key ...
	I1123 08:32:14.924476  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.key: {Name:mk8c2c1a6932591a0561992965f8ee7640d119a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
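	The crypto.go steps above generate, per profile, a client cert, an apiserver serving cert, and an aggregator proxy-client cert, each signed by the matching CA from the shared .minikube directory. For orientation only, the equivalent flow with plain openssl looks roughly like this (file names and the subject are illustrative, not minikube's exact internals):
	
	    # Key and CSR for the client identity
	    openssl genrsa -out client.key 2048
	    openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
	    # Sign with the cluster CA generated earlier
	    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt
	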
	I1123 08:32:14.924758  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/14479.pem (1338 bytes)
	W1123 08:32:14.924815  326134 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10922/.minikube/certs/14479_empty.pem, impossibly tiny 0 bytes
	I1123 08:32:14.924829  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:32:14.924868  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:32:14.924905  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:32:14.924936  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/key.pem (1675 bytes)
	I1123 08:32:14.924997  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/files/etc/ssl/certs/144792.pem (1708 bytes)
	I1123 08:32:14.925655  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:32:14.945369  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:32:14.968340  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:32:14.988840  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:32:15.007674  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 08:32:15.029568  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:32:15.050180  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:32:15.068810  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:32:15.089087  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/files/etc/ssl/certs/144792.pem --> /usr/share/ca-certificates/144792.pem (1708 bytes)
	I1123 08:32:15.110582  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:32:15.133022  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/certs/14479.pem --> /usr/share/ca-certificates/14479.pem (1338 bytes)
	I1123 08:32:15.153948  326134 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:32:15.173239  326134 ssh_runner.go:195] Run: openssl version
	I1123 08:32:15.183240  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:32:15.194622  326134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:32:15.199634  326134 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:32:15.199779  326134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:32:15.258387  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:32:15.270123  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14479.pem && ln -fs /usr/share/ca-certificates/14479.pem /etc/ssl/certs/14479.pem"
	I1123 08:32:15.284696  326134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14479.pem
	I1123 08:32:15.292891  326134 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14479.pem
	I1123 08:32:15.292972  326134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14479.pem
	I1123 08:32:15.359192  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14479.pem /etc/ssl/certs/51391683.0"
	I1123 08:32:15.373656  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144792.pem && ln -fs /usr/share/ca-certificates/144792.pem /etc/ssl/certs/144792.pem"
	I1123 08:32:15.384091  326134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144792.pem
	I1123 08:32:15.388887  326134 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144792.pem
	I1123 08:32:15.388958  326134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144792.pem
	I1123 08:32:15.443323  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144792.pem /etc/ssl/certs/3ec20f2e.0"
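	The openssl x509 -hash / ln -fs pairs above implement the OpenSSL c_rehash convention: tools that trust /etc/ssl/certs look certificates up by subject-hash filenames such as b5213941.0, so each installed PEM gets a symlink named after its hash. By hand, for the minikubeCA case seen in this log:
	
	    # Prints the subject hash, e.g. b5213941 for minikubeCA.pem
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	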
	I1123 08:32:15.460097  326134 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:32:15.467434  326134 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:32:15.467521  326134 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-589368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-589368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:15.467625  326134 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:32:15.467703  326134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:32:15.516472  326134 cri.go:89] found id: ""
	I1123 08:32:15.516606  326134 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:32:15.530331  326134 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:32:15.540469  326134 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:32:15.540554  326134 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:32:15.550596  326134 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:32:15.550614  326134 kubeadm.go:158] found existing configuration files:
	
	I1123 08:32:15.550662  326134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 08:32:15.561320  326134 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:32:15.561383  326134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:32:15.572743  326134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 08:32:15.584338  326134 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:32:15.584407  326134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:32:15.595366  326134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 08:32:15.605467  326134 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:32:15.605547  326134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:32:15.615744  326134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 08:32:15.627425  326134 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:32:15.627492  326134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
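	Each of the four grep-then-rm cycles above asks the same question: does the existing kubeconfig point at this cluster's control-plane endpoint? If not, or if the file is missing, it is removed so kubeadm regenerates it. Condensed into one loop (a sketch of the same logic, not minikube's code):
	
	    ENDPOINT="https://control-plane.minikube.internal:8444"
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	    done
	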
	I1123 08:32:15.638000  326134 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:32:15.688923  326134 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:32:15.688995  326134 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:32:15.720637  326134 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:32:15.720729  326134 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:32:15.720772  326134 kubeadm.go:319] OS: Linux
	I1123 08:32:15.720822  326134 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:32:15.721005  326134 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:32:15.721212  326134 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:32:15.721367  326134 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:32:15.721431  326134 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:32:15.721491  326134 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:32:15.721558  326134 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:32:15.721616  326134 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:32:15.812531  326134 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:32:15.812732  326134 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:32:15.812946  326134 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:32:15.819634  326134 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
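	The preflight hint above is actionable outside the harness: pulling control-plane images ahead of kubeadm init takes download time off the bring-up critical path. For the version under test:
	
	    sudo kubeadm config images pull --kubernetes-version v1.34.1
	    # List the required images without pulling them
	    kubeadm config images list --kubernetes-version v1.34.1
	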
	I1123 08:32:12.399685  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:12.899290  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:13.399323  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:13.899054  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:14.399443  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:14.899288  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:15.399123  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:15.899731  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:15.999297  314870 kubeadm.go:1114] duration metric: took 5.199592792s to wait for elevateKubeSystemPrivileges
	I1123 08:32:15.999338  314870 kubeadm.go:403] duration metric: took 18.659838402s to StartCluster
	I1123 08:32:15.999359  314870 settings.go:142] acquiring lock: {Name:mk436e1608db541c991c29c7031bb6bf416025bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:15.999426  314870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:16.001957  314870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/kubeconfig: {Name:mk728060aa1e1ef3d8ab678673d9cf01ff53b55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:16.002268  314870 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:32:16.002602  314870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:32:16.002804  314870 config.go:182] Loaded profile config "no-preload-073500": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:16.002850  314870 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:32:16.002929  314870 addons.go:70] Setting storage-provisioner=true in profile "no-preload-073500"
	I1123 08:32:16.002961  314870 addons.go:239] Setting addon storage-provisioner=true in "no-preload-073500"
	I1123 08:32:16.002980  314870 addons.go:70] Setting default-storageclass=true in profile "no-preload-073500"
	I1123 08:32:16.002987  314870 host.go:66] Checking if "no-preload-073500" exists ...
	I1123 08:32:16.003009  314870 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-073500"
	I1123 08:32:16.003383  314870 cli_runner.go:164] Run: docker container inspect no-preload-073500 --format={{.State.Status}}
	I1123 08:32:16.003539  314870 cli_runner.go:164] Run: docker container inspect no-preload-073500 --format={{.State.Status}}
	I1123 08:32:16.005225  314870 out.go:179] * Verifying Kubernetes components...
	I1123 08:32:16.006812  314870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:32:16.043915  314870 addons.go:239] Setting addon default-storageclass=true in "no-preload-073500"
	I1123 08:32:16.043964  314870 host.go:66] Checking if "no-preload-073500" exists ...
	I1123 08:32:16.044444  314870 cli_runner.go:164] Run: docker container inspect no-preload-073500 --format={{.State.Status}}
	I1123 08:32:16.044724  314870 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:32:16.048309  314870 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:32:16.048336  314870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:32:16.048406  314870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-073500
	I1123 08:32:16.071830  314870 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:32:16.071903  314870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:32:16.071999  314870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-073500
	I1123 08:32:16.078085  314870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/no-preload-073500/id_rsa Username:docker}
	I1123 08:32:16.109699  314870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/no-preload-073500/id_rsa Username:docker}
	I1123 08:32:16.185396  314870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:32:16.227817  314870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:32:16.245006  314870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:32:16.281945  314870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:32:16.397837  314870 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 08:32:16.399402  314870 node_ready.go:35] waiting up to 6m0s for node "no-preload-073500" to be "Ready" ...
	I1123 08:32:16.610305  314870 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:32:16.611607  314870 addons.go:530] duration metric: took 608.755845ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:32:16.902546  314870 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-073500" context rescaled to 1 replicas
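	kapi.go:214 rescales the CoreDNS deployment because a single-node test cluster does not need the default two replicas. The same operation by hand would be:
	
	    kubectl --context no-preload-073500 -n kube-system scale deployment coredns --replicas=1
	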
	I1123 08:32:15.821563  326134 out.go:252]   - Generating certificates and keys ...
	I1123 08:32:15.821663  326134 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:32:15.821733  326134 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:32:16.304138  326134 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:32:16.889295  326134 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:32:16.997677  326134 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:32:17.305154  326134 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:32:15.812955  318549 addons.go:530] duration metric: took 657.239994ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:32:16.055379  318549 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-329854" context rescaled to 1 replicas
	W1123 08:32:17.554653  318549 node_ready.go:57] node "embed-certs-329854" has "Ready":"False" status (will retry)
	I1123 08:32:17.593524  326134 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:32:17.593769  326134 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-589368 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:32:17.658683  326134 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:32:17.658875  326134 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-589368 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:32:17.804016  326134 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:32:18.000605  326134 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:32:18.144467  326134 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:32:18.144604  326134 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:32:18.405359  326134 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:32:18.470862  326134 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:32:18.624734  326134 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:32:19.017532  326134 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:32:19.456219  326134 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:32:19.456839  326134 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:32:19.461436  326134 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	73dd2722107d4       56cc512116c8f       8 seconds ago       Running             busybox                   0                   409a2ee88f516       busybox                                          default
	bc64aaf15fbe5       ead0a4a53df89       14 seconds ago      Running             coredns                   0                   4ffd31fa157ed       coredns-5dd5756b68-mwh86                         kube-system
	50f0601099a49       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   48f513718684b       storage-provisioner                              kube-system
	62d2a524a89ee       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   705647d71054a       kindnet-lcz6v                                    kube-system
	1ed5b63781114       ea1030da44aa1       30 seconds ago      Running             kube-proxy                0                   f661b855b5cdf       kube-proxy-fjlft                                 kube-system
	d923b5213e8b3       4be79c38a4bab       48 seconds ago      Running             kube-controller-manager   0                   2965fc3ede0d5       kube-controller-manager-old-k8s-version-644335   kube-system
	b8ecb78185d1c       f6f496300a2ae       48 seconds ago      Running             kube-scheduler            0                   c585031e51bcb       kube-scheduler-old-k8s-version-644335            kube-system
	45f46799be931       73deb9a3f7025       48 seconds ago      Running             etcd                      0                   8007c3b8739b4       etcd-old-k8s-version-644335                      kube-system
	24fc7d3f2b9d3       bb5e0dde9054c       48 seconds ago      Running             kube-apiserver            0                   4a573e7a3f588       kube-apiserver-old-k8s-version-644335            kube-system
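	The container-status table above comes from crictl on the node; the same data can be queried directly when triaging a failure, filtered to the pod under test (IDs taken from this table):
	
	    # State and IDs for the busybox container
	    sudo crictl ps -a --name busybox
	    # Stdout/stderr of a specific container by ID prefix
	    sudo crictl logs 73dd2722107d4
	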
	
	
	==> containerd <==
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.325596043Z" level=info msg="Container bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.327403635Z" level=info msg="StartContainer for \"50f0601099a49d8b3aa7aa3969c99085cf44a469a178e8abf8471bfa986a8a68\""
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.331298096Z" level=info msg="connecting to shim 50f0601099a49d8b3aa7aa3969c99085cf44a469a178e8abf8471bfa986a8a68" address="unix:///run/containerd/s/78f21c230e1fb6b22bdbb486c34576bdd8e920366f7a88552dfed7aa5f553000" protocol=ttrpc version=3
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.338880612Z" level=info msg="CreateContainer within sandbox \"4ffd31fa157eddcd6f9292a8c27313d69f986601a28013ab0c135931e1cba973\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626\""
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.339903459Z" level=info msg="StartContainer for \"bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626\""
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.341217274Z" level=info msg="connecting to shim bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626" address="unix:///run/containerd/s/fd9ebf2d67921a967fdc9a8838443eeac59b75e77d6d68d910cd26ba2583770b" protocol=ttrpc version=3
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.415217475Z" level=info msg="StartContainer for \"bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626\" returns successfully"
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.415306861Z" level=info msg="StartContainer for \"50f0601099a49d8b3aa7aa3969c99085cf44a469a178e8abf8471bfa986a8a68\" returns successfully"
	Nov 23 08:32:11 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:11.512786868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:37e84c8a-3caa-4e37-9815-c33d14d90a29,Namespace:default,Attempt:0,}"
	Nov 23 08:32:11 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:11.553418847Z" level=info msg="connecting to shim 409a2ee88f5167730944a8fd2efa0563e6a063bd420bbaa66e0d23dd170a6937" address="unix:///run/containerd/s/f305f9ec82c2152d8e0b1c2b423bc40abad24b69084e4d7c8690dd0af6413061" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:32:11 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:11.628215817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:37e84c8a-3caa-4e37-9815-c33d14d90a29,Namespace:default,Attempt:0,} returns sandbox id \"409a2ee88f5167730944a8fd2efa0563e6a063bd420bbaa66e0d23dd170a6937\""
	Nov 23 08:32:11 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:11.630165738Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.856737250Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.857753641Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.859243509Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.861610045Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.862077455Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.231862306s"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.862115634Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.864291954Z" level=info msg="CreateContainer within sandbox \"409a2ee88f5167730944a8fd2efa0563e6a063bd420bbaa66e0d23dd170a6937\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.872244588Z" level=info msg="Container 73dd2722107d47288be1e4b164c5af81e9227f8f9bdb14a886ef57494f460595: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.878943379Z" level=info msg="CreateContainer within sandbox \"409a2ee88f5167730944a8fd2efa0563e6a063bd420bbaa66e0d23dd170a6937\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"73dd2722107d47288be1e4b164c5af81e9227f8f9bdb14a886ef57494f460595\""
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.879607281Z" level=info msg="StartContainer for \"73dd2722107d47288be1e4b164c5af81e9227f8f9bdb14a886ef57494f460595\""
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.880573129Z" level=info msg="connecting to shim 73dd2722107d47288be1e4b164c5af81e9227f8f9bdb14a886ef57494f460595" address="unix:///run/containerd/s/f305f9ec82c2152d8e0b1c2b423bc40abad24b69084e4d7c8690dd0af6413061" protocol=ttrpc version=3
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.933418274Z" level=info msg="StartContainer for \"73dd2722107d47288be1e4b164c5af81e9227f8f9bdb14a886ef57494f460595\" returns successfully"
	Nov 23 08:32:21 old-k8s-version-644335 containerd[666]: E1123 08:32:21.312077     666 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37633 - 65137 "HINFO IN 9129358495986739779.2328090660322760570. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01918351s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-644335
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-644335
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=old-k8s-version-644335
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_31_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:31:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-644335
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:32:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:32:10 +0000   Sun, 23 Nov 2025 08:31:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:32:10 +0000   Sun, 23 Nov 2025 08:31:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:32:10 +0000   Sun, 23 Nov 2025 08:31:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:32:10 +0000   Sun, 23 Nov 2025 08:32:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-644335
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                a70afb12-85c0-4a98-8e1d-33bd0981eaa5
	  Boot ID:                    5380b858-5e3f-4ab2-b78d-8704cd56a682
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-mwh86                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-old-k8s-version-644335                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-lcz6v                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-644335             250m (3%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-old-k8s-version-644335    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-fjlft                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-644335             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30s   kube-proxy       
	  Normal  Starting                 43s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  43s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  43s   kubelet          Node old-k8s-version-644335 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s   kubelet          Node old-k8s-version-644335 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s   kubelet          Node old-k8s-version-644335 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s   node-controller  Node old-k8s-version-644335 event: Registered Node old-k8s-version-644335 in Controller
	  Normal  NodeReady                16s   kubelet          Node old-k8s-version-644335 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 7d 09 6f 5f 2b 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 d4 5e e6 42 49 08 06
	[ +11.373766] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 a4 f8 6b 15 37 08 06
	[  +0.013916] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[ +40.470104] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	[  +0.167388] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 d3 04 3f 4c f4 08 06
	[  +2.400864] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 01 20 fe a4 35 08 06
	[  +0.000386] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[  +5.210763] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[Nov23 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a c0 03 9d 77 98 08 06
	[  +0.000409] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[ +19.602508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 9b 99 36 e6 f4 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	
	
	==> etcd [45f46799be93103538a709214de68c0cfbbf97b2984f32cac94f7d09dc881032] <==
	{"level":"info","ts":"2025-11-23T08:31:50.375808Z","caller":"traceutil/trace.go:171","msg":"trace[1731385581] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:285; }","duration":"133.686427ms","start":"2025-11-23T08:31:50.242084Z","end":"2025-11-23T08:31:50.375771Z","steps":["trace[1731385581] 'agreement among raft nodes before linearized reading'  (duration: 133.406024ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:31:50.543068Z","caller":"traceutil/trace.go:171","msg":"trace[1083647756] transaction","detail":"{read_only:false; response_revision:288; number_of_response:1; }","duration":"124.895436ms","start":"2025-11-23T08:31:50.41815Z","end":"2025-11-23T08:31:50.543045Z","steps":["trace[1083647756] 'process raft request'  (duration: 122.203936ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:06.815255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.879635ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361280699337 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" mod_revision:345 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" value_size:3753 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:32:06.815491Z","caller":"traceutil/trace.go:171","msg":"trace[764994583] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"337.630134ms","start":"2025-11-23T08:32:06.47784Z","end":"2025-11-23T08:32:06.81547Z","steps":["trace[764994583] 'process raft request'  (duration: 337.507841ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:06.815604Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-23T08:32:06.47782Z","time spent":"337.730787ms","remote":"127.0.0.1:46012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2776,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:373 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:2722 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2025-11-23T08:32:06.815726Z","caller":"traceutil/trace.go:171","msg":"trace[653677693] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"341.506856ms","start":"2025-11-23T08:32:06.474196Z","end":"2025-11-23T08:32:06.815703Z","steps":["trace[653677693] 'process raft request'  (duration: 69.997725ms)","trace[653677693] 'compare'  (duration: 270.78569ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:06.815805Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-23T08:32:06.474175Z","time spent":"341.601245ms","remote":"127.0.0.1:46012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3812,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" mod_revision:345 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" value_size:3753 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" > >"}
	{"level":"info","ts":"2025-11-23T08:32:06.978858Z","caller":"traceutil/trace.go:171","msg":"trace[1104013326] linearizableReadLoop","detail":"{readStateIndex:408; appliedIndex:407; }","duration":"153.807411ms","start":"2025-11-23T08:32:06.825027Z","end":"2025-11-23T08:32:06.978835Z","steps":["trace[1104013326] 'read index received'  (duration: 126.893155ms)","trace[1104013326] 'applied index is now lower than readState.Index'  (duration: 26.91341ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:32:06.978893Z","caller":"traceutil/trace.go:171","msg":"trace[1774032522] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"155.455842ms","start":"2025-11-23T08:32:06.823409Z","end":"2025-11-23T08:32:06.978865Z","steps":["trace[1774032522] 'process raft request'  (duration: 128.518156ms)","trace[1774032522] 'compare'  (duration: 26.787664ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:06.97905Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.337268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:32:06.979151Z","caller":"traceutil/trace.go:171","msg":"trace[367819674] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:392; }","duration":"115.452675ms","start":"2025-11-23T08:32:06.863687Z","end":"2025-11-23T08:32:06.97914Z","steps":["trace[367819674] 'agreement among raft nodes before linearized reading'  (duration: 115.275981ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:06.979074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.057518ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" ","response":"range_response_count:1 size:3827"}
	{"level":"info","ts":"2025-11-23T08:32:06.979238Z","caller":"traceutil/trace.go:171","msg":"trace[657616236] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-mwh86; range_end:; response_count:1; response_revision:392; }","duration":"154.225473ms","start":"2025-11-23T08:32:06.824998Z","end":"2025-11-23T08:32:06.979224Z","steps":["trace[657616236] 'agreement among raft nodes before linearized reading'  (duration: 153.963673ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.221098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.176439ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361280699345 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" mod_revision:389 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" value_size:4635 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:32:07.221196Z","caller":"traceutil/trace.go:171","msg":"trace[10519771] linearizableReadLoop","detail":"{readStateIndex:410; appliedIndex:409; }","duration":"219.765585ms","start":"2025-11-23T08:32:07.001416Z","end":"2025-11-23T08:32:07.221181Z","steps":["trace[10519771] 'read index received'  (duration: 110.39588ms)","trace[10519771] 'applied index is now lower than readState.Index'  (duration: 109.368598ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:32:07.221414Z","caller":"traceutil/trace.go:171","msg":"trace[288254356] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"235.559152ms","start":"2025-11-23T08:32:06.985843Z","end":"2025-11-23T08:32:07.221403Z","steps":["trace[288254356] 'process raft request'  (duration: 126.008432ms)","trace[288254356] 'compare'  (duration: 109.100658ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:07.221637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.237552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/\" range_end:\"/registry/serviceaccounts/default0\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-23T08:32:07.221676Z","caller":"traceutil/trace.go:171","msg":"trace[2038075669] range","detail":"{range_begin:/registry/serviceaccounts/default/; range_end:/registry/serviceaccounts/default0; response_count:1; response_revision:394; }","duration":"220.283438ms","start":"2025-11-23T08:32:07.001382Z","end":"2025-11-23T08:32:07.221665Z","steps":["trace[2038075669] 'agreement among raft nodes before linearized reading'  (duration: 220.198091ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.221836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.431881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2025-11-23T08:32:07.221869Z","caller":"traceutil/trace.go:171","msg":"trace[47038502] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:394; }","duration":"151.463285ms","start":"2025-11-23T08:32:07.070396Z","end":"2025-11-23T08:32:07.22186Z","steps":["trace[47038502] 'agreement among raft nodes before linearized reading'  (duration: 151.405582ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.2221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.610453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2025-11-23T08:32:07.222135Z","caller":"traceutil/trace.go:171","msg":"trace[324676363] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:394; }","duration":"151.648054ms","start":"2025-11-23T08:32:07.070481Z","end":"2025-11-23T08:32:07.222129Z","steps":["trace[324676363] 'agreement among raft nodes before linearized reading'  (duration: 151.58707ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.375582Z","caller":"traceutil/trace.go:171","msg":"trace[1417634899] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"124.663097ms","start":"2025-11-23T08:32:07.250901Z","end":"2025-11-23T08:32:07.375564Z","steps":["trace[1417634899] 'process raft request'  (duration: 124.509404ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.642408Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.527999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:8 size:41134"}
	{"level":"info","ts":"2025-11-23T08:32:07.642473Z","caller":"traceutil/trace.go:171","msg":"trace[1290776648] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:8; response_revision:395; }","duration":"164.607893ms","start":"2025-11-23T08:32:07.477852Z","end":"2025-11-23T08:32:07.64246Z","steps":["trace[1290776648] 'range keys from in-memory index tree'  (duration: 164.355272ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:32:22 up  1:14,  0 user,  load average: 4.92, 3.81, 2.47
	Linux old-k8s-version-644335 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [62d2a524a89eea1a86a495032f44dc4fcf3f295b6a37b5252e52f94b48d1d408] <==
	I1123 08:31:55.985760       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:31:55.986183       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 08:31:55.986889       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:31:55.986985       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:31:55.987037       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:31:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:31:56.284272       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:31:56.284302       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:31:56.284313       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:31:56.306661       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:31:56.684624       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:31:56.706097       1 metrics.go:72] Registering metrics
	I1123 08:31:56.706300       1 controller.go:711] "Syncing nftables rules"
	I1123 08:32:06.290264       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:32:06.290315       1 main.go:301] handling current node
	I1123 08:32:16.284771       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:32:16.284832       1 main.go:301] handling current node
	
	
	==> kube-apiserver [24fc7d3f2b9d331ec00ee0d24edc912d0f9231bf439464acf39ca1b352dbd9ae] <==
	I1123 08:31:35.987204       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 08:31:35.987463       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:31:35.987866       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 08:31:35.987904       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:31:35.987913       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:31:35.987964       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:31:35.987989       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:31:35.988740       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 08:31:35.990046       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:31:36.184555       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:31:36.894578       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:31:36.898290       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:31:36.898313       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:31:37.418846       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:31:37.463855       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:31:37.604156       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:31:37.611375       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1123 08:31:37.612546       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:31:37.617251       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:31:37.947132       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:31:38.954904       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:31:38.975029       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:31:38.988942       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 08:31:51.510002       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 08:31:51.707000       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [d923b5213e8b3d8af44cbc9fa87cabbfe8d1fb7bc4713bfb51ef49ec43be859f] <==
	I1123 08:31:50.945363       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1123 08:31:50.950847       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:31:51.002028       1 shared_informer.go:318] Caches are synced for disruption
	I1123 08:31:51.318133       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:31:51.318170       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:31:51.354770       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:31:51.517923       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 08:31:51.718486       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fjlft"
	I1123 08:31:51.721754       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lcz6v"
	I1123 08:31:51.821332       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jvkkt"
	I1123 08:31:51.842261       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mwh86"
	I1123 08:31:51.861032       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="344.408288ms"
	I1123 08:31:51.877129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.028296ms"
	I1123 08:31:51.902023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.822241ms"
	I1123 08:31:51.902333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="182.107µs"
	I1123 08:31:52.329982       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 08:31:52.342048       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-jvkkt"
	I1123 08:31:52.351470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.923596ms"
	I1123 08:31:52.359652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.080621ms"
	I1123 08:31:52.360693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="563.114µs"
	I1123 08:32:06.818990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.609µs"
	I1123 08:32:07.223787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.532µs"
	I1123 08:32:09.303387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.481025ms"
	I1123 08:32:09.303531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.899µs"
	I1123 08:32:10.919115       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [1ed5b637811145d9fff3280921703c00b63f4c3b1c52c2f4c5440f7cd29f382f] <==
	I1123 08:31:52.411015       1 server_others.go:69] "Using iptables proxy"
	I1123 08:31:52.425293       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1123 08:31:52.450105       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:31:52.453169       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:31:52.453286       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:31:52.453303       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:31:52.453343       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:31:52.453700       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:31:52.453721       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:31:52.454600       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:31:52.454743       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:31:52.454776       1 config.go:188] "Starting service config controller"
	I1123 08:31:52.454782       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:31:52.454667       1 config.go:315] "Starting node config controller"
	I1123 08:31:52.454794       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:31:52.554947       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 08:31:52.555084       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:31:52.555678       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b8ecb78185d1c1fe98dec4b47de4066108120b3e315e7bbf74d7a4cc46af1cf0] <==
	W1123 08:31:35.950545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 08:31:35.950610       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 08:31:36.785890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 08:31:36.785936       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 08:31:36.788876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1123 08:31:36.788913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 08:31:36.809968       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 08:31:36.810019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 08:31:36.913744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:31:36.913787       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:31:37.072670       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 08:31:37.072723       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 08:31:37.079728       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 08:31:37.079769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 08:31:37.113046       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 08:31:37.113122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 08:31:37.132201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 08:31:37.132261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 08:31:37.158811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 08:31:37.158853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 08:31:37.183654       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 08:31:37.183698       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 08:31:37.332055       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 08:31:37.332096       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1123 08:31:39.848190       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:31:50 old-k8s-version-644335 kubelet[1500]: I1123 08:31:50.851565    1500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.727543    1500 topology_manager.go:215] "Topology Admit Handler" podUID="43a841de-4dd0-46a2-aae4-901399aa0515" podNamespace="kube-system" podName="kube-proxy-fjlft"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.732323    1500 topology_manager.go:215] "Topology Admit Handler" podUID="eac5ce99-6c74-46c9-a0c0-a595c22303e4" podNamespace="kube-system" podName="kindnet-lcz6v"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.758902    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43a841de-4dd0-46a2-aae4-901399aa0515-kube-proxy\") pod \"kube-proxy-fjlft\" (UID: \"43a841de-4dd0-46a2-aae4-901399aa0515\") " pod="kube-system/kube-proxy-fjlft"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759010    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eac5ce99-6c74-46c9-a0c0-a595c22303e4-xtables-lock\") pod \"kindnet-lcz6v\" (UID: \"eac5ce99-6c74-46c9-a0c0-a595c22303e4\") " pod="kube-system/kindnet-lcz6v"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759065    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcrlc\" (UniqueName: \"kubernetes.io/projected/eac5ce99-6c74-46c9-a0c0-a595c22303e4-kube-api-access-fcrlc\") pod \"kindnet-lcz6v\" (UID: \"eac5ce99-6c74-46c9-a0c0-a595c22303e4\") " pod="kube-system/kindnet-lcz6v"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759106    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43a841de-4dd0-46a2-aae4-901399aa0515-lib-modules\") pod \"kube-proxy-fjlft\" (UID: \"43a841de-4dd0-46a2-aae4-901399aa0515\") " pod="kube-system/kube-proxy-fjlft"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759135    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5k64\" (UniqueName: \"kubernetes.io/projected/43a841de-4dd0-46a2-aae4-901399aa0515-kube-api-access-m5k64\") pod \"kube-proxy-fjlft\" (UID: \"43a841de-4dd0-46a2-aae4-901399aa0515\") " pod="kube-system/kube-proxy-fjlft"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759163    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eac5ce99-6c74-46c9-a0c0-a595c22303e4-lib-modules\") pod \"kindnet-lcz6v\" (UID: \"eac5ce99-6c74-46c9-a0c0-a595c22303e4\") " pod="kube-system/kindnet-lcz6v"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759211    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43a841de-4dd0-46a2-aae4-901399aa0515-xtables-lock\") pod \"kube-proxy-fjlft\" (UID: \"43a841de-4dd0-46a2-aae4-901399aa0515\") " pod="kube-system/kube-proxy-fjlft"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759242    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eac5ce99-6c74-46c9-a0c0-a595c22303e4-cni-cfg\") pod \"kindnet-lcz6v\" (UID: \"eac5ce99-6c74-46c9-a0c0-a595c22303e4\") " pod="kube-system/kindnet-lcz6v"
	Nov 23 08:31:56 old-k8s-version-644335 kubelet[1500]: I1123 08:31:56.239581    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fjlft" podStartSLOduration=5.239491113 podCreationTimestamp="2025-11-23 08:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:31:53.233477489 +0000 UTC m=+14.309128463" watchObservedRunningTime="2025-11-23 08:31:56.239491113 +0000 UTC m=+17.315142087"
	Nov 23 08:31:56 old-k8s-version-644335 kubelet[1500]: I1123 08:31:56.239759    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-lcz6v" podStartSLOduration=2.078949801 podCreationTimestamp="2025-11-23 08:31:51 +0000 UTC" firstStartedPulling="2025-11-23 08:31:52.421191166 +0000 UTC m=+13.496842130" lastFinishedPulling="2025-11-23 08:31:55.581967422 +0000 UTC m=+16.657618387" observedRunningTime="2025-11-23 08:31:56.239122504 +0000 UTC m=+17.314773487" watchObservedRunningTime="2025-11-23 08:31:56.239726058 +0000 UTC m=+17.315377029"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.386223    1500 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.822222    1500 topology_manager.go:215] "Topology Admit Handler" podUID="fbe4548b-bcc9-427c-afbe-4a04f65d1997" podNamespace="kube-system" podName="coredns-5dd5756b68-mwh86"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.825399    1500 topology_manager.go:215] "Topology Admit Handler" podUID="8bbfd059-0548-413b-bc78-b5b6446505a3" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.967962    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8bbfd059-0548-413b-bc78-b5b6446505a3-tmp\") pod \"storage-provisioner\" (UID: \"8bbfd059-0548-413b-bc78-b5b6446505a3\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.968019    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbe4548b-bcc9-427c-afbe-4a04f65d1997-config-volume\") pod \"coredns-5dd5756b68-mwh86\" (UID: \"fbe4548b-bcc9-427c-afbe-4a04f65d1997\") " pod="kube-system/coredns-5dd5756b68-mwh86"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.968147    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z62f\" (UniqueName: \"kubernetes.io/projected/8bbfd059-0548-413b-bc78-b5b6446505a3-kube-api-access-4z62f\") pod \"storage-provisioner\" (UID: \"8bbfd059-0548-413b-bc78-b5b6446505a3\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.968201    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8xn\" (UniqueName: \"kubernetes.io/projected/fbe4548b-bcc9-427c-afbe-4a04f65d1997-kube-api-access-kr8xn\") pod \"coredns-5dd5756b68-mwh86\" (UID: \"fbe4548b-bcc9-427c-afbe-4a04f65d1997\") " pod="kube-system/coredns-5dd5756b68-mwh86"
	Nov 23 08:32:09 old-k8s-version-644335 kubelet[1500]: I1123 08:32:09.289054    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mwh86" podStartSLOduration=18.288991864 podCreationTimestamp="2025-11-23 08:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:09.288745026 +0000 UTC m=+30.364395999" watchObservedRunningTime="2025-11-23 08:32:09.288991864 +0000 UTC m=+30.364642833"
	Nov 23 08:32:09 old-k8s-version-644335 kubelet[1500]: I1123 08:32:09.289205    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.289173734 podCreationTimestamp="2025-11-23 08:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:09.274686083 +0000 UTC m=+30.350337055" watchObservedRunningTime="2025-11-23 08:32:09.289173734 +0000 UTC m=+30.364824706"
	Nov 23 08:32:11 old-k8s-version-644335 kubelet[1500]: I1123 08:32:11.201560    1500 topology_manager.go:215] "Topology Admit Handler" podUID="37e84c8a-3caa-4e37-9815-c33d14d90a29" podNamespace="default" podName="busybox"
	Nov 23 08:32:11 old-k8s-version-644335 kubelet[1500]: I1123 08:32:11.395725    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnt7m\" (UniqueName: \"kubernetes.io/projected/37e84c8a-3caa-4e37-9815-c33d14d90a29-kube-api-access-bnt7m\") pod \"busybox\" (UID: \"37e84c8a-3caa-4e37-9815-c33d14d90a29\") " pod="default/busybox"
	Nov 23 08:32:14 old-k8s-version-644335 kubelet[1500]: I1123 08:32:14.288604    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.055523078 podCreationTimestamp="2025-11-23 08:32:11 +0000 UTC" firstStartedPulling="2025-11-23 08:32:11.629681818 +0000 UTC m=+32.705332772" lastFinishedPulling="2025-11-23 08:32:13.86254197 +0000 UTC m=+34.938192935" observedRunningTime="2025-11-23 08:32:14.288364717 +0000 UTC m=+35.364015689" watchObservedRunningTime="2025-11-23 08:32:14.288383241 +0000 UTC m=+35.364034209"
	
	
	==> storage-provisioner [50f0601099a49d8b3aa7aa3969c99085cf44a469a178e8abf8471bfa986a8a68] <==
	I1123 08:32:08.428251       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:32:08.443328       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:32:08.443473       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:32:08.453585       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:32:08.453709       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-644335_5327fecd-767a-4068-8265-dc7d74cde00f!
	I1123 08:32:08.453833       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2326ae7d-273a-46ce-b18b-ec889e34408f", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-644335_5327fecd-767a-4068-8265-dc7d74cde00f became leader
	I1123 08:32:08.554756       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-644335_5327fecd-767a-4068-8265-dc7d74cde00f!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-644335 -n old-k8s-version-644335
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-644335 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
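Note: the status and pod-phase probes just above are how the harness inspects cluster state after a failure, and they can be replayed by hand against the same profile. A minimal sketch, assuming the old-k8s-version-644335 profile is still running and out/minikube-linux-amd64 is built in the working tree (the describe-node call is an addition here for convenience, not a command the harness itself runs):

    # API-server health as reported by minikube (helpers_test.go:262)
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-644335 -n old-k8s-version-644335
    # any pods stuck outside the Running phase (helpers_test.go:269)
    kubectl --context old-k8s-version-644335 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
    # the node conditions/capacity/events dump seen earlier in the log
    kubectl --context old-k8s-version-644335 describe node old-k8s-version-644335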
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-644335
helpers_test.go:243: (dbg) docker inspect old-k8s-version-644335:

-- stdout --
	[
	    {
	        "Id": "7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090",
	        "Created": "2025-11-23T08:31:21.747802519Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307672,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:31:21.797762818Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090/hosts",
	        "LogPath": "/var/lib/docker/containers/7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090/7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090-json.log",
	        "Name": "/old-k8s-version-644335",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-644335:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-644335",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7f1745895cd2daf384493bf619c7d82fd6d9b63f3e54969e6aa984818e599090",
	                "LowerDir": "/var/lib/docker/overlay2/7e9348bdbf7fe5caf38705598582fde24d8735176b764198431205917d3b77af-init/diff:/var/lib/docker/overlay2/f8ae64c4d7d1e12e69b7d69a01d34a96c2f353aeac48a9b438b028f010c32149/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e9348bdbf7fe5caf38705598582fde24d8735176b764198431205917d3b77af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e9348bdbf7fe5caf38705598582fde24d8735176b764198431205917d3b77af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e9348bdbf7fe5caf38705598582fde24d8735176b764198431205917d3b77af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-644335",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-644335/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-644335",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-644335",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-644335",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dea5f6418c9754475e9b8008038381cb3d815f711e582f84688bee584969dd23",
	            "SandboxKey": "/var/run/docker/netns/dea5f6418c97",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-644335": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c091c62244399da24761260bbfae0c90eec0be75c4b7689a6c70a071c5b8f23e",
	                    "EndpointID": "ab0acae5c0b37e265556d281ecf3291a32c77d757fcd3abbe1b2799b8bbf788b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "1a:e6:a3:19:3e:4b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-644335",
	                        "7f1745895cd2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
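Note: when only a field or two from the inspect payload is needed (for example the container state, or the host port mapped to the API server's 8443/tcp), docker inspect accepts a Go-template --format instead of dumping the full JSON above. A small sketch, not part of the harness, assuming the container still exists:

    # container state only
    docker inspect -f '{{.State.Status}}' old-k8s-version-644335
    # host port bound to the apiserver's 8443/tcp (33096 in the dump above)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-644335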
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-644335 -n old-k8s-version-644335
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-644335 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-644335 logs -n 25: (1.232967184s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-366757 sudo systemctl cat kubelet --no-pager                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                          │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /var/lib/kubelet/config.yaml                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status docker --all --full --no-pager                                                                                                          │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat docker --no-pager                                                                                                                          │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/docker/daemon.json                                                                                                                              │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo docker system info                                                                                                                                       │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl status cri-docker --all --full --no-pager                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat cri-docker --no-pager                                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cri-dockerd --version                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status containerd --all --full --no-pager                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl cat containerd --no-pager                                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /lib/systemd/system/containerd.service                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/containerd/config.toml                                                                                                                          │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo containerd config dump                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status crio --all --full --no-pager                                                                                                            │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat crio --no-pager                                                                                                                            │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                  │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo crio config                                                                                                                                              │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ delete  │ -p bridge-366757                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-900754                                                                                                                                                │ disable-driver-mounts-900754 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ default-k8s-diff-port-589368 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:32:02
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:32:02.365149  326134 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:32:02.365430  326134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:02.365439  326134 out.go:374] Setting ErrFile to fd 2...
	I1123 08:32:02.365444  326134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:02.365688  326134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:32:02.366222  326134 out.go:368] Setting JSON to false
	I1123 08:32:02.367361  326134 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4460,"bootTime":1763882262,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:32:02.367415  326134 start.go:143] virtualization: kvm guest
	I1123 08:32:02.369623  326134 out.go:179] * [default-k8s-diff-port-589368] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:32:02.370841  326134 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:32:02.370904  326134 notify.go:221] Checking for updates...
	I1123 08:32:02.373164  326134 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:32:02.374347  326134 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:02.375434  326134 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 08:32:02.376528  326134 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:32:02.377529  326134 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:32:02.378965  326134 config.go:182] Loaded profile config "embed-certs-329854": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:02.379086  326134 config.go:182] Loaded profile config "no-preload-073500": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:02.379154  326134 config.go:182] Loaded profile config "old-k8s-version-644335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:32:02.379256  326134 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:32:02.407081  326134 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:32:02.407244  326134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:02.472040  326134 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-23 08:32:02.45999755 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:02.472162  326134 docker.go:319] overlay module found
	I1123 08:32:02.474068  326134 out.go:179] * Using the docker driver based on user configuration
	I1123 08:32:02.475288  326134 start.go:309] selected driver: docker
	I1123 08:32:02.475306  326134 start.go:927] validating driver "docker" against <nil>
	I1123 08:32:02.475318  326134 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:32:02.476049  326134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:02.538637  326134 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-23 08:32:02.527137057 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:02.538955  326134 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:32:02.539261  326134 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:32:02.540977  326134 out.go:179] * Using Docker driver with root privileges
	I1123 08:32:02.542238  326134 cni.go:84] Creating CNI manager for ""
	I1123 08:32:02.542329  326134 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:02.542344  326134 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:32:02.542437  326134 start.go:353] cluster config:
	{Name:default-k8s-diff-port-589368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-589368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:02.543888  326134 out.go:179] * Starting "default-k8s-diff-port-589368" primary control-plane node in "default-k8s-diff-port-589368" cluster
	I1123 08:32:02.544940  326134 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:32:02.546095  326134 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:32:02.547277  326134 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:32:02.547320  326134 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 08:32:02.547343  326134 cache.go:65] Caching tarball of preloaded images
	I1123 08:32:02.547394  326134 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:32:02.547459  326134 preload.go:238] Found /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:32:02.547475  326134 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:32:02.547640  326134 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/config.json ...
	I1123 08:32:02.547678  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/config.json: {Name:mk1f809d1452f95feae198ba9c84eb715cc0365a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:02.572550  326134 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:32:02.572569  326134 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:32:02.572586  326134 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:32:02.572623  326134 start.go:360] acquireMachinesLock for default-k8s-diff-port-589368: {Name:mk824e721d9528bfc83f46b2967dfcdfbed28a63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:32:02.572732  326134 start.go:364] duration metric: took 90.345µs to acquireMachinesLock for "default-k8s-diff-port-589368"
	I1123 08:32:02.572763  326134 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-589368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-589368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:32:02.572832  326134 start.go:125] createHost starting for "" (driver="docker")
	W1123 08:32:00.803173  306530 node_ready.go:57] node "old-k8s-version-644335" has "Ready":"False" status (will retry)
	W1123 08:32:03.302835  306530 node_ready.go:57] node "old-k8s-version-644335" has "Ready":"False" status (will retry)
	I1123 08:32:02.574971  326134 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:32:02.575244  326134 start.go:159] libmachine.API.Create for "default-k8s-diff-port-589368" (driver="docker")
	I1123 08:32:02.575290  326134 client.go:173] LocalClient.Create starting
	I1123 08:32:02.575395  326134 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem
	I1123 08:32:02.575452  326134 main.go:143] libmachine: Decoding PEM data...
	I1123 08:32:02.575478  326134 main.go:143] libmachine: Parsing certificate...
	I1123 08:32:02.575588  326134 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10922/.minikube/certs/cert.pem
	I1123 08:32:02.575620  326134 main.go:143] libmachine: Decoding PEM data...
	I1123 08:32:02.575639  326134 main.go:143] libmachine: Parsing certificate...
	I1123 08:32:02.576024  326134 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-589368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:32:02.594523  326134 cli_runner.go:211] docker network inspect default-k8s-diff-port-589368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:32:02.594623  326134 network_create.go:284] running [docker network inspect default-k8s-diff-port-589368] to gather additional debugging logs...
	I1123 08:32:02.594647  326134 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-589368
	W1123 08:32:02.612207  326134 cli_runner.go:211] docker network inspect default-k8s-diff-port-589368 returned with exit code 1
	I1123 08:32:02.612243  326134 network_create.go:287] error running [docker network inspect default-k8s-diff-port-589368]: docker network inspect default-k8s-diff-port-589368: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-589368 not found
	I1123 08:32:02.612266  326134 network_create.go:289] output of [docker network inspect default-k8s-diff-port-589368]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-589368 not found
	
	** /stderr **
	I1123 08:32:02.612490  326134 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:32:02.630896  326134 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-88eb84305350 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:b0:8c:95:93:f7} reservation:<nil>}
	I1123 08:32:02.631571  326134 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d1d9c6d8034d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:6f:a4:bc:f0:ec} reservation:<nil>}
	I1123 08:32:02.632272  326134 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9a1acaa7a50f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:4f:2c:f2:7e:e0} reservation:<nil>}
	I1123 08:32:02.632976  326134 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4bf2fad4a2d5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:eb:26:95:d5:87} reservation:<nil>}
	I1123 08:32:02.633817  326134 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8c9f0}
	I1123 08:32:02.633845  326134 network_create.go:124] attempt to create docker network default-k8s-diff-port-589368 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 08:32:02.633926  326134 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-589368 default-k8s-diff-port-589368
	I1123 08:32:02.687114  326134 network_create.go:108] docker network default-k8s-diff-port-589368 192.168.85.0/24 created
	I1123 08:32:02.687145  326134 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-589368" container
	I1123 08:32:02.687215  326134 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:32:02.707082  326134 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-589368 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-589368 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:32:02.726188  326134 oci.go:103] Successfully created a docker volume default-k8s-diff-port-589368
	I1123 08:32:02.726303  326134 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-589368-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-589368 --entrypoint /usr/bin/test -v default-k8s-diff-port-589368:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:32:03.147122  326134 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-589368
	I1123 08:32:03.147192  326134 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:32:03.147205  326134 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:32:03.147259  326134 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-589368:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 08:32:05.804418  306530 node_ready.go:57] node "old-k8s-version-644335" has "Ready":"False" status (will retry)
	I1123 08:32:06.817045  306530 node_ready.go:49] node "old-k8s-version-644335" is "Ready"
	I1123 08:32:06.817079  306530 node_ready.go:38] duration metric: took 14.518019801s for node "old-k8s-version-644335" to be "Ready" ...
	I1123 08:32:06.817095  306530 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:32:06.817160  306530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:32:06.856324  306530 api_server.go:72] duration metric: took 15.083460409s to wait for apiserver process to appear ...
	I1123 08:32:06.856432  306530 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:32:06.856532  306530 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:32:06.980123  306530 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:32:06.981554  306530 api_server.go:141] control plane version: v1.28.0
	I1123 08:32:06.981588  306530 api_server.go:131] duration metric: took 125.088431ms to wait for apiserver health ...
	I1123 08:32:06.981600  306530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:32:06.998688  306530 system_pods.go:59] 8 kube-system pods found
	I1123 08:32:06.998904  306530 system_pods.go:61] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending
	I1123 08:32:06.998916  306530 system_pods.go:61] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:06.998922  306530 system_pods.go:61] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:06.998927  306530 system_pods.go:61] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:06.998933  306530 system_pods.go:61] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:06.998937  306530 system_pods.go:61] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:06.998943  306530 system_pods.go:61] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:06.998947  306530 system_pods.go:61] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending
	I1123 08:32:06.998954  306530 system_pods.go:74] duration metric: took 17.347129ms to wait for pod list to return data ...
	I1123 08:32:06.998974  306530 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:32:07.226094  306530 default_sa.go:45] found service account: "default"
	I1123 08:32:07.226239  306530 default_sa.go:55] duration metric: took 227.257372ms for default service account to be created ...
	I1123 08:32:07.226264  306530 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:32:07.246837  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:07.246885  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:32:07.246895  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:07.246902  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:07.246908  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:07.246914  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:07.246919  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:07.246924  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:07.246928  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending
	I1123 08:32:07.246952  306530 retry.go:31] will retry after 228.830225ms: missing components: kube-dns
	I1123 08:32:07.657267  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:07.657320  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:32:07.657329  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:07.657338  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:07.657344  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:07.657355  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:07.657359  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:07.657364  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:07.657375  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:32:07.657393  306530 retry.go:31] will retry after 330.888352ms: missing components: kube-dns
	I1123 08:32:07.993569  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:07.993616  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:32:07.993626  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:07.993632  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:07.993638  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:07.993645  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:07.993650  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:07.993658  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:07.993666  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:32:07.993685  306530 retry.go:31] will retry after 309.599463ms: missing components: kube-dns
	I1123 08:32:08.311383  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:08.311430  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:32:08.311439  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:08.311447  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:08.311452  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:08.311459  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:08.311464  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:08.311469  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:08.311476  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:32:08.311495  306530 retry.go:31] will retry after 394.800609ms: missing components: kube-dns
	I1123 08:32:08.713374  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:08.713421  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:32:08.713431  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:08.713437  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:08.713443  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:08.713449  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:08.713454  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:08.713459  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:08.713467  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:32:08.713486  306530 retry.go:31] will retry after 626.925144ms: missing components: kube-dns
	I1123 08:32:09.345080  306530 system_pods.go:86] 8 kube-system pods found
	I1123 08:32:09.345107  306530 system_pods.go:89] "coredns-5dd5756b68-mwh86" [fbe4548b-bcc9-427c-afbe-4a04f65d1997] Running
	I1123 08:32:09.345113  306530 system_pods.go:89] "etcd-old-k8s-version-644335" [35aad5ae-42b1-45e7-aa99-73095dc89d5a] Running
	I1123 08:32:09.345116  306530 system_pods.go:89] "kindnet-lcz6v" [eac5ce99-6c74-46c9-a0c0-a595c22303e4] Running
	I1123 08:32:09.345120  306530 system_pods.go:89] "kube-apiserver-old-k8s-version-644335" [9ae9ecef-e1d0-4002-832e-4f0ef5a9645b] Running
	I1123 08:32:09.345124  306530 system_pods.go:89] "kube-controller-manager-old-k8s-version-644335" [0ff36cb3-558f-49d7-bf48-a5645f4e575f] Running
	I1123 08:32:09.345127  306530 system_pods.go:89] "kube-proxy-fjlft" [43a841de-4dd0-46a2-aae4-901399aa0515] Running
	I1123 08:32:09.345132  306530 system_pods.go:89] "kube-scheduler-old-k8s-version-644335" [ae490e16-ca14-4c51-a024-be0735700ea6] Running
	I1123 08:32:09.345135  306530 system_pods.go:89] "storage-provisioner" [8bbfd059-0548-413b-bc78-b5b6446505a3] Running
	I1123 08:32:09.345143  306530 system_pods.go:126] duration metric: took 2.118863007s to wait for k8s-apps to be running ...
	I1123 08:32:09.345152  306530 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:32:09.345191  306530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:32:09.358626  306530 system_svc.go:56] duration metric: took 13.466479ms WaitForService to wait for kubelet
	I1123 08:32:09.358657  306530 kubeadm.go:587] duration metric: took 17.585799049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:32:09.358677  306530 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:32:09.361987  306530 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:32:09.362020  306530 node_conditions.go:123] node cpu capacity is 8
	I1123 08:32:09.362035  306530 node_conditions.go:105] duration metric: took 3.353601ms to run NodePressure ...
	I1123 08:32:09.362047  306530 start.go:242] waiting for startup goroutines ...
	I1123 08:32:09.362054  306530 start.go:247] waiting for cluster config update ...
	I1123 08:32:09.362064  306530 start.go:256] writing updated cluster config ...
	I1123 08:32:09.362362  306530 ssh_runner.go:195] Run: rm -f paused
	I1123 08:32:09.366401  306530 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:32:09.371069  306530 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mwh86" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.376095  306530 pod_ready.go:94] pod "coredns-5dd5756b68-mwh86" is "Ready"
	I1123 08:32:09.376151  306530 pod_ready.go:86] duration metric: took 5.053235ms for pod "coredns-5dd5756b68-mwh86" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.378964  306530 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.383429  306530 pod_ready.go:94] pod "etcd-old-k8s-version-644335" is "Ready"
	I1123 08:32:09.383449  306530 pod_ready.go:86] duration metric: took 4.460707ms for pod "etcd-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.386372  306530 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.390805  306530 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-644335" is "Ready"
	I1123 08:32:09.390831  306530 pod_ready.go:86] duration metric: took 4.429668ms for pod "kube-apiserver-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.393674  306530 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.771706  306530 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-644335" is "Ready"
	I1123 08:32:09.771737  306530 pod_ready.go:86] duration metric: took 378.038443ms for pod "kube-controller-manager-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:09.972722  306530 pod_ready.go:83] waiting for pod "kube-proxy-fjlft" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:10.370981  306530 pod_ready.go:94] pod "kube-proxy-fjlft" is "Ready"
	I1123 08:32:10.371006  306530 pod_ready.go:86] duration metric: took 398.257667ms for pod "kube-proxy-fjlft" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:10.514640  314870 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:32:10.514754  314870 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:32:10.514892  314870 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:32:10.514978  314870 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:32:10.515028  314870 kubeadm.go:319] OS: Linux
	I1123 08:32:10.515136  314870 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:32:10.515203  314870 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:32:10.515270  314870 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:32:10.515345  314870 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:32:10.515430  314870 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:32:10.515522  314870 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:32:10.515590  314870 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:32:10.515669  314870 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:32:10.515775  314870 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:32:10.515911  314870 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:32:10.516048  314870 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:32:10.516136  314870 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:32:10.517774  314870 out.go:252]   - Generating certificates and keys ...
	I1123 08:32:10.517847  314870 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:32:10.517953  314870 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:32:10.518052  314870 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:32:10.518126  314870 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:32:10.518203  314870 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:32:10.518277  314870 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:32:10.518349  314870 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:32:10.518490  314870 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-073500] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:32:10.518572  314870 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:32:10.518711  314870 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-073500] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:32:10.518798  314870 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:32:10.518889  314870 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:32:10.518956  314870 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:32:10.519026  314870 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:32:10.519092  314870 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:32:10.519170  314870 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:32:10.519256  314870 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:32:10.519376  314870 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:32:10.519464  314870 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:32:10.519584  314870 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:32:10.519663  314870 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:32:10.521011  314870 out.go:252]   - Booting up control plane ...
	I1123 08:32:10.521091  314870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:32:10.521156  314870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:32:10.521244  314870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:32:10.521374  314870 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:32:10.521480  314870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:32:10.521611  314870 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:32:10.521695  314870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:32:10.521736  314870 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:32:10.521854  314870 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:32:10.522003  314870 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:32:10.522095  314870 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001353803s
	I1123 08:32:10.522185  314870 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:32:10.522259  314870 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1123 08:32:10.522341  314870 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:32:10.522412  314870 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:32:10.522480  314870 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.699829875s
	I1123 08:32:10.522563  314870 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.799265887s
	I1123 08:32:10.522638  314870 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002757713s
	I1123 08:32:10.522745  314870 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:32:10.522843  314870 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:32:10.522890  314870 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:32:10.523072  314870 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-073500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:32:10.523130  314870 kubeadm.go:319] [bootstrap-token] Using token: xbt0ca.7qstjhvsu0orvs9m
	I1123 08:32:10.524469  314870 out.go:252]   - Configuring RBAC rules ...
	I1123 08:32:10.524592  314870 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:32:10.524669  314870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:32:10.524799  314870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:32:10.524929  314870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:32:10.525032  314870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:32:10.525108  314870 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:32:10.525210  314870 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:32:10.525263  314870 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:32:10.525303  314870 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:32:10.525309  314870 kubeadm.go:319] 
	I1123 08:32:10.525395  314870 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:32:10.525405  314870 kubeadm.go:319] 
	I1123 08:32:10.525484  314870 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:32:10.525494  314870 kubeadm.go:319] 
	I1123 08:32:10.525553  314870 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:32:10.525650  314870 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:32:10.525770  314870 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:32:10.525784  314870 kubeadm.go:319] 
	I1123 08:32:10.525865  314870 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:32:10.525874  314870 kubeadm.go:319] 
	I1123 08:32:10.525921  314870 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:32:10.525933  314870 kubeadm.go:319] 
	I1123 08:32:10.525982  314870 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:32:10.526060  314870 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:32:10.526157  314870 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:32:10.526174  314870 kubeadm.go:319] 
	I1123 08:32:10.526255  314870 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:32:10.526369  314870 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:32:10.526377  314870 kubeadm.go:319] 
	I1123 08:32:10.526463  314870 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xbt0ca.7qstjhvsu0orvs9m \
	I1123 08:32:10.526577  314870 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54663b9a07b99dc9bb266865529fcac752142d486218fea7481ff08893e16d79 \
	I1123 08:32:10.526598  314870 kubeadm.go:319] 	--control-plane 
	I1123 08:32:10.526614  314870 kubeadm.go:319] 
	I1123 08:32:10.526738  314870 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:32:10.526747  314870 kubeadm.go:319] 
	I1123 08:32:10.526858  314870 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xbt0ca.7qstjhvsu0orvs9m \
	I1123 08:32:10.527006  314870 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54663b9a07b99dc9bb266865529fcac752142d486218fea7481ff08893e16d79 
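
	For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded public key (its SubjectPublicKeyInfo). A minimal Go sketch of that computation, assuming the CA is readable at the illustrative path ca.crt (on these nodes it lives under /var/lib/minikube/certs):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("ca.crt") // illustrative path
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}

	A joining node recomputes this digest from the CA it is handed during bootstrap and refuses to join if it does not match the hash on the command line.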
	I1123 08:32:10.527037  314870 cni.go:84] Creating CNI manager for ""
	I1123 08:32:10.527048  314870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:10.528491  314870 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:32:10.661882  318549 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:32:10.661958  318549 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:32:10.662066  318549 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:32:10.662134  318549 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:32:10.662176  318549 kubeadm.go:319] OS: Linux
	I1123 08:32:10.662232  318549 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:32:10.662287  318549 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:32:10.662345  318549 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:32:10.662402  318549 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:32:10.662459  318549 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:32:10.662604  318549 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:32:10.662668  318549 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:32:10.662736  318549 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:32:10.662825  318549 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:32:10.662948  318549 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:32:10.663057  318549 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:32:10.663137  318549 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:32:10.668048  318549 out.go:252]   - Generating certificates and keys ...
	I1123 08:32:10.668161  318549 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:32:10.668265  318549 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:32:10.668376  318549 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:32:10.668466  318549 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:32:10.668599  318549 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:32:10.668690  318549 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:32:10.668764  318549 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:32:10.668934  318549 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-329854 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:32:10.669038  318549 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:32:10.669238  318549 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-329854 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:32:10.669308  318549 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:32:10.669362  318549 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:32:10.669428  318549 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:32:10.669526  318549 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:32:10.669617  318549 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:32:10.669694  318549 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:32:10.669801  318549 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:32:10.669902  318549 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:32:10.670012  318549 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:32:10.670109  318549 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:32:10.670186  318549 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:32:10.671689  318549 out.go:252]   - Booting up control plane ...
	I1123 08:32:10.671802  318549 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:32:10.671886  318549 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:32:10.671975  318549 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:32:10.672136  318549 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:32:10.672270  318549 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:32:10.672398  318549 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:32:10.672485  318549 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:32:10.672541  318549 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:32:10.672685  318549 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:32:10.672807  318549 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:32:10.672896  318549 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.51456ms
	I1123 08:32:10.673028  318549 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:32:10.673147  318549 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 08:32:10.673274  318549 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:32:10.673393  318549 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:32:10.673520  318549 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.640714346s
	I1123 08:32:10.673611  318549 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.942673779s
	I1123 08:32:10.673699  318549 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001521455s
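
	The control-plane checks above are plain HTTP(S) health probes: the kubelet's healthz on 127.0.0.1:10248, the controller-manager and scheduler on their local secure ports, and the apiserver's livez endpoint. A minimal Go sketch of the kubelet probe, reusing the endpoint and the 4m0s budget from the log (the 500ms poll interval is an assumption):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // kubeadm's "up to 4m0s"
		for time.Now().Before(deadline) {
			resp, err := http.Get("http://127.0.0.1:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet is healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		fmt.Println("kubelet did not become healthy within the budget")
	}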
	I1123 08:32:10.673831  318549 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:32:10.674022  318549 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:32:10.674075  318549 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:32:10.674431  318549 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-329854 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:32:10.674559  318549 kubeadm.go:319] [bootstrap-token] Using token: k7cb1u.d38rcugcduwh7x1h
	I1123 08:32:10.677042  318549 out.go:252]   - Configuring RBAC rules ...
	I1123 08:32:10.677186  318549 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:32:10.677286  318549 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:32:10.677457  318549 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:32:10.677653  318549 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:32:10.677835  318549 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:32:10.677966  318549 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:32:10.678148  318549 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:32:10.678214  318549 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:32:10.678277  318549 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:32:10.678291  318549 kubeadm.go:319] 
	I1123 08:32:10.678372  318549 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:32:10.678383  318549 kubeadm.go:319] 
	I1123 08:32:10.678499  318549 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:32:10.678523  318549 kubeadm.go:319] 
	I1123 08:32:10.678562  318549 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:32:10.678627  318549 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:32:10.678691  318549 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:32:10.678699  318549 kubeadm.go:319] 
	I1123 08:32:10.678782  318549 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:32:10.678793  318549 kubeadm.go:319] 
	I1123 08:32:10.678855  318549 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:32:10.678864  318549 kubeadm.go:319] 
	I1123 08:32:10.678952  318549 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:32:10.679062  318549 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:32:10.679153  318549 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:32:10.679161  318549 kubeadm.go:319] 
	I1123 08:32:10.679258  318549 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:32:10.679363  318549 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:32:10.679372  318549 kubeadm.go:319] 
	I1123 08:32:10.679480  318549 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token k7cb1u.d38rcugcduwh7x1h \
	I1123 08:32:10.679652  318549 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54663b9a07b99dc9bb266865529fcac752142d486218fea7481ff08893e16d79 \
	I1123 08:32:10.679685  318549 kubeadm.go:319] 	--control-plane 
	I1123 08:32:10.679690  318549 kubeadm.go:319] 
	I1123 08:32:10.679806  318549 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:32:10.679822  318549 kubeadm.go:319] 
	I1123 08:32:10.679966  318549 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token k7cb1u.d38rcugcduwh7x1h \
	I1123 08:32:10.680132  318549 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54663b9a07b99dc9bb266865529fcac752142d486218fea7481ff08893e16d79 
	I1123 08:32:10.680155  318549 cni.go:84] Creating CNI manager for ""
	I1123 08:32:10.680163  318549 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:10.681571  318549 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:32:10.571886  306530 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:10.971385  306530 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-644335" is "Ready"
	I1123 08:32:10.971415  306530 pod_ready.go:86] duration metric: took 399.502745ms for pod "kube-scheduler-old-k8s-version-644335" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:32:10.971430  306530 pod_ready.go:40] duration metric: took 1.604993054s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:32:11.033661  306530 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1123 08:32:11.035051  306530 out.go:203] 
	W1123 08:32:11.036353  306530 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:32:11.039086  306530 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:32:11.040450  306530 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-644335" cluster and "default" namespace by default
	I1123 08:32:10.529719  314870 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:32:10.534944  314870 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:32:10.534964  314870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:32:10.549433  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:32:10.799697  314870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:32:10.799770  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:10.799853  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-073500 minikube.k8s.io/updated_at=2025_11_23T08_32_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=no-preload-073500 minikube.k8s.io/primary=true
	I1123 08:32:10.898855  314870 ops.go:34] apiserver oom_adj: -16
	I1123 08:32:10.899004  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:11.399695  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:11.899300  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
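
	The repeated `kubectl get sa default` runs above are minikube polling, at roughly 500ms intervals, for the "default" ServiceAccount to exist; its appearance is the signal that kube-system privileges have been elevated (the embed-certs profile logs the matching "elevateKubeSystemPrivileges" duration further down). A rough Go equivalent, assuming kubectl and a valid kubeconfig are available on the host:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption
		for time.Now().Before(deadline) {
			// Succeeds only once the token controller has created the SA.
			if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
				fmt.Println("default ServiceAccount exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default ServiceAccount")
	}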
	I1123 08:32:08.170638  326134 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-589368:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.023218769s)
	I1123 08:32:08.170700  326134 kic.go:203] duration metric: took 5.023489314s to extract preloaded images to volume ...
	W1123 08:32:08.170828  326134 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:32:08.170891  326134 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:32:08.170957  326134 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:32:08.263601  326134 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-589368 --name default-k8s-diff-port-589368 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-589368 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-589368 --network default-k8s-diff-port-589368 --ip 192.168.85.2 --volume default-k8s-diff-port-589368:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:32:08.663695  326134 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-589368 --format={{.State.Running}}
	I1123 08:32:08.688914  326134 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-589368 --format={{.State.Status}}
	I1123 08:32:08.716117  326134 cli_runner.go:164] Run: docker exec default-k8s-diff-port-589368 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:32:08.771536  326134 oci.go:144] the created container "default-k8s-diff-port-589368" has a running status.
	I1123 08:32:08.771571  326134 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa...
	I1123 08:32:08.842617  326134 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:32:08.874417  326134 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-589368 --format={{.State.Status}}
	I1123 08:32:08.899344  326134 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:32:08.899370  326134 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-589368 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:32:08.979387  326134 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-589368 --format={{.State.Status}}
	I1123 08:32:09.003807  326134 machine.go:94] provisionDockerMachine start ...
	I1123 08:32:09.003983  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:09.033752  326134 main.go:143] libmachine: Using SSH client type: native
	I1123 08:32:09.034122  326134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 08:32:09.034138  326134 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:32:09.035254  326134 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48000->127.0.0.1:33108: read: connection reset by peer
	I1123 08:32:12.184579  326134 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-589368
	
	I1123 08:32:12.184605  326134 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-589368"
	I1123 08:32:12.184787  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:12.204947  326134 main.go:143] libmachine: Using SSH client type: native
	I1123 08:32:12.205165  326134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 08:32:12.205178  326134 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-589368 && echo "default-k8s-diff-port-589368" | sudo tee /etc/hostname
	I1123 08:32:12.364428  326134 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-589368
	
	I1123 08:32:12.364496  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:10.682639  318549 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:32:10.687459  318549 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:32:10.687477  318549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:32:10.701352  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:32:10.977767  318549 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:32:10.977927  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-329854 minikube.k8s.io/updated_at=2025_11_23T08_32_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=embed-certs-329854 minikube.k8s.io/primary=true
	I1123 08:32:10.978075  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:10.993796  318549 ops.go:34] apiserver oom_adj: -16
	I1123 08:32:11.078883  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:11.579842  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:12.079241  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:12.579599  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:13.079939  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:13.579129  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:12.384269  326134 main.go:143] libmachine: Using SSH client type: native
	I1123 08:32:12.384542  326134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 08:32:12.384562  326134 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-589368' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-589368/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-589368' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:32:12.531200  326134 main.go:143] libmachine: SSH cmd err, output: <nil>: 
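
	The SSH command above is an idempotent /etc/hosts update: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. The same logic as a Go sketch (hostname taken from the log; writing /etc/hosts requires root):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const host = "default-k8s-diff-port-589368"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Equivalent of: grep -xq '.*\s<host>' /etc/hosts
		if regexp.MustCompile(`(?m)^.*\s` + host + `$`).Match(data) {
			return // hostname already mapped
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.Match(data) {
			data = re.ReplaceAll(data, []byte("127.0.1.1 "+host))
		} else {
			data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", host))...)
		}
		if err := os.WriteFile("/etc/hosts", data, 0o644); err != nil {
			panic(err)
		}
	}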
	I1123 08:32:12.531235  326134 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10922/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10922/.minikube}
	I1123 08:32:12.531271  326134 ubuntu.go:190] setting up certificates
	I1123 08:32:12.531290  326134 provision.go:84] configureAuth start
	I1123 08:32:12.531361  326134 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-589368
	I1123 08:32:12.549866  326134 provision.go:143] copyHostCerts
	I1123 08:32:12.549946  326134 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10922/.minikube/ca.pem, removing ...
	I1123 08:32:12.549963  326134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.pem
	I1123 08:32:12.550040  326134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10922/.minikube/ca.pem (1078 bytes)
	I1123 08:32:12.550152  326134 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10922/.minikube/cert.pem, removing ...
	I1123 08:32:12.550166  326134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10922/.minikube/cert.pem
	I1123 08:32:12.550224  326134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10922/.minikube/cert.pem (1123 bytes)
	I1123 08:32:12.550308  326134 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10922/.minikube/key.pem, removing ...
	I1123 08:32:12.550319  326134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10922/.minikube/key.pem
	I1123 08:32:12.550356  326134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10922/.minikube/key.pem (1675 bytes)
	I1123 08:32:12.550440  326134 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10922/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-589368 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-589368 localhost minikube]
	I1123 08:32:12.630027  326134 provision.go:177] copyRemoteCerts
	I1123 08:32:12.630087  326134 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:32:12.630122  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:12.651460  326134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa Username:docker}
	I1123 08:32:12.754829  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:32:12.774851  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 08:32:12.793762  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:32:12.812458  326134 provision.go:87] duration metric: took 281.153863ms to configureAuth
	I1123 08:32:12.812486  326134 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:32:12.812713  326134 config.go:182] Loaded profile config "default-k8s-diff-port-589368": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:12.812728  326134 machine.go:97] duration metric: took 3.808874609s to provisionDockerMachine
	I1123 08:32:12.812737  326134 client.go:176] duration metric: took 10.237434724s to LocalClient.Create
	I1123 08:32:12.812760  326134 start.go:167] duration metric: took 10.237519395s to libmachine.API.Create "default-k8s-diff-port-589368"
	I1123 08:32:12.812772  326134 start.go:293] postStartSetup for "default-k8s-diff-port-589368" (driver="docker")
	I1123 08:32:12.812783  326134 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:32:12.812843  326134 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:32:12.812958  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:12.832166  326134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa Username:docker}
	I1123 08:32:12.937078  326134 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:32:12.941011  326134 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:32:12.941047  326134 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:32:12.941061  326134 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10922/.minikube/addons for local assets ...
	I1123 08:32:12.941116  326134 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10922/.minikube/files for local assets ...
	I1123 08:32:12.941234  326134 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10922/.minikube/files/etc/ssl/certs/144792.pem -> 144792.pem in /etc/ssl/certs
	I1123 08:32:12.941372  326134 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:32:12.950015  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/files/etc/ssl/certs/144792.pem --> /etc/ssl/certs/144792.pem (1708 bytes)
	I1123 08:32:12.972737  326134 start.go:296] duration metric: took 159.951896ms for postStartSetup
	I1123 08:32:12.973091  326134 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-589368
	I1123 08:32:12.991848  326134 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/config.json ...
	I1123 08:32:12.992096  326134 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:32:12.992133  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:13.010214  326134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa Username:docker}
	I1123 08:32:13.111184  326134 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:32:13.117057  326134 start.go:128] duration metric: took 10.544210299s to createHost
	I1123 08:32:13.117083  326134 start.go:83] releasing machines lock for "default-k8s-diff-port-589368", held for 10.544335962s
	I1123 08:32:13.117159  326134 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-589368
	I1123 08:32:13.138821  326134 ssh_runner.go:195] Run: cat /version.json
	I1123 08:32:13.138876  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:13.138898  326134 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:32:13.139000  326134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-589368
	I1123 08:32:13.159900  326134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa Username:docker}
	I1123 08:32:13.160675  326134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/default-k8s-diff-port-589368/id_rsa Username:docker}
	I1123 08:32:13.319429  326134 ssh_runner.go:195] Run: systemctl --version
	I1123 08:32:13.326550  326134 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:32:13.331385  326134 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:32:13.331460  326134 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:32:13.358246  326134 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 08:32:13.358269  326134 start.go:496] detecting cgroup driver to use...
	I1123 08:32:13.358306  326134 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:32:13.358360  326134 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:32:13.375851  326134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:32:13.388488  326134 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:32:13.388557  326134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:32:13.405861  326134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:32:13.426538  326134 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:32:13.517053  326134 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:32:13.614675  326134 docker.go:234] disabling docker service ...
	I1123 08:32:13.614753  326134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:32:13.636054  326134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:32:13.650186  326134 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:32:13.740250  326134 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:32:13.843726  326134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:32:13.858236  326134 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:32:13.875875  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:32:13.887815  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:32:13.899519  326134 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 08:32:13.899579  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 08:32:13.910780  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:32:13.923659  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:32:13.936812  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:32:13.947028  326134 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:32:13.955978  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:32:13.967808  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:32:13.979257  326134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:32:13.989005  326134 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:32:13.997178  326134 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:32:14.005215  326134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:32:14.085715  326134 ssh_runner.go:195] Run: sudo systemctl restart containerd
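
	The sed runs above rewrite /etc/containerd/config.toml in place before this restart; the key change is forcing SystemdCgroup = true so containerd's cgroup driver matches the "systemd" driver minikube detected on the host (the kubelet config below uses cgroupDriver: systemd as well, and a mismatch between the two is a classic source of pods failing to start). That one substitution as a Go sketch, with the config path assumed:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml" // assumed path, as in the log
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}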
	I1123 08:32:14.211871  326134 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:32:14.211935  326134 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:32:14.216762  326134 start.go:564] Will wait 60s for crictl version
	I1123 08:32:14.216825  326134 ssh_runner.go:195] Run: which crictl
	I1123 08:32:14.221323  326134 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:32:14.248285  326134 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:32:14.248347  326134 ssh_runner.go:195] Run: containerd --version
	I1123 08:32:14.272256  326134 ssh_runner.go:195] Run: containerd --version
	I1123 08:32:14.296476  326134 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:32:14.079520  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:14.579711  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:15.079695  318549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:15.153588  318549 kubeadm.go:1114] duration metric: took 4.175594558s to wait for elevateKubeSystemPrivileges
	I1123 08:32:15.153626  318549 kubeadm.go:403] duration metric: took 16.859259885s to StartCluster
	I1123 08:32:15.153647  318549 settings.go:142] acquiring lock: {Name:mk436e1608db541c991c29c7031bb6bf416025bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:15.153728  318549 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:15.155269  318549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/kubeconfig: {Name:mk728060aa1e1ef3d8ab678673d9cf01ff53b55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:15.155565  318549 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:32:15.155691  318549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:32:15.155714  318549 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:32:15.155816  318549 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-329854"
	I1123 08:32:15.155840  318549 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-329854"
	I1123 08:32:15.155876  318549 host.go:66] Checking if "embed-certs-329854" exists ...
	I1123 08:32:15.155920  318549 config.go:182] Loaded profile config "embed-certs-329854": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:15.155982  318549 addons.go:70] Setting default-storageclass=true in profile "embed-certs-329854"
	I1123 08:32:15.156001  318549 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-329854"
	I1123 08:32:15.156259  318549 cli_runner.go:164] Run: docker container inspect embed-certs-329854 --format={{.State.Status}}
	I1123 08:32:15.156418  318549 cli_runner.go:164] Run: docker container inspect embed-certs-329854 --format={{.State.Status}}
	I1123 08:32:15.161020  318549 out.go:179] * Verifying Kubernetes components...
	I1123 08:32:15.162691  318549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:32:15.183365  318549 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:32:15.184604  318549 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:32:15.184635  318549 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:32:15.184694  318549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-329854
	I1123 08:32:15.184807  318549 addons.go:239] Setting addon default-storageclass=true in "embed-certs-329854"
	I1123 08:32:15.184853  318549 host.go:66] Checking if "embed-certs-329854" exists ...
	I1123 08:32:15.185323  318549 cli_runner.go:164] Run: docker container inspect embed-certs-329854 --format={{.State.Status}}
	I1123 08:32:15.219027  318549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/embed-certs-329854/id_rsa Username:docker}
	I1123 08:32:15.225516  318549 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:32:15.225540  318549 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:32:15.225599  318549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-329854
	I1123 08:32:15.250951  318549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/embed-certs-329854/id_rsa Username:docker}
	I1123 08:32:15.259884  318549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:32:15.330669  318549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:32:15.373794  318549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:32:15.402456  318549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:32:15.549450  318549 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 08:32:15.550676  318549 node_ready.go:35] waiting up to 6m0s for node "embed-certs-329854" to be "Ready" ...
	I1123 08:32:15.811232  318549 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:32:14.297818  326134 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-589368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:32:14.316606  326134 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:32:14.321176  326134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:32:14.331789  326134 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-589368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-589368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:32:14.331893  326134 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:32:14.331935  326134 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:32:14.357362  326134 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:32:14.357385  326134 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:32:14.357437  326134 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:32:14.385849  326134 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:32:14.385878  326134 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:32:14.385887  326134 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1123 08:32:14.386008  326134 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-589368 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-589368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:32:14.386080  326134 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:32:14.416374  326134 cni.go:84] Creating CNI manager for ""
	I1123 08:32:14.416404  326134 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:14.416421  326134 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:32:14.416449  326134 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-589368 NodeName:default-k8s-diff-port-589368 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:32:14.416615  326134 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-589368"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:32:14.416688  326134 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:32:14.425909  326134 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:32:14.425995  326134 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:32:14.434360  326134 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1123 08:32:14.449035  326134 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:32:14.468244  326134 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1123 08:32:14.483002  326134 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:32:14.486912  326134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:32:14.497851  326134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:32:14.581410  326134 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:32:14.614517  326134 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368 for IP: 192.168.85.2
	I1123 08:32:14.614543  326134 certs.go:195] generating shared ca certs ...
	I1123 08:32:14.614563  326134 certs.go:227] acquiring lock for ca certs: {Name:mk76a9e50dc1d967f9b3db23534d451cf588eb45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.614740  326134 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.key
	I1123 08:32:14.614805  326134 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10922/.minikube/proxy-client-ca.key
	I1123 08:32:14.614818  326134 certs.go:257] generating profile certs ...
	I1123 08:32:14.614890  326134 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.key
	I1123 08:32:14.614908  326134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.crt with IP's: []
	I1123 08:32:14.647919  326134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.crt ...
	I1123 08:32:14.647953  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.crt: {Name:mkb3dce0606b4e20557ecb8120f9887326d0cf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.648166  326134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.key ...
	I1123 08:32:14.648190  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/client.key: {Name:mk629923008b86112793d0aec571412cc0ad28a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.648339  326134 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key.6d2968c0
	I1123 08:32:14.648368  326134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt.6d2968c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:32:14.686307  326134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt.6d2968c0 ...
	I1123 08:32:14.686332  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt.6d2968c0: {Name:mk1e00809095cee6e818a6e146ec68827bae6918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.686524  326134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key.6d2968c0 ...
	I1123 08:32:14.686541  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key.6d2968c0: {Name:mkfb447eff51bae5aefaab05debc824049f50368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.686645  326134 certs.go:382] copying /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt.6d2968c0 -> /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt
	I1123 08:32:14.686795  326134 certs.go:386] copying /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key.6d2968c0 -> /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key
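
The apiserver serving certificate generated above is signed for four IP SANs: the kubernetes.default service ClusterIP (10.96.0.1), loopback, 10.0.0.1, and the node's address on the docker network (192.168.85.2), so TLS verifies for clients both inside and outside the cluster. A sketch of issuing such a certificate with Go's crypto/x509 (illustrative only; minikube's crypto.go differs in detail):

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newAPIServerCert issues a serving cert whose IP SANs match the
	// ones in the log line above. Sketch only.
	func newAPIServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the StartCluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"),    // kubernetes.default service ClusterIP
				net.ParseIP("127.0.0.1"),    // loopback clients on the node
				net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.85.2"), // node IP on the docker network
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}
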
	I1123 08:32:14.686890  326134 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.key
	I1123 08:32:14.686910  326134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.crt with IP's: []
	I1123 08:32:14.924216  326134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.crt ...
	I1123 08:32:14.924254  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.crt: {Name:mkb9386d64575dc8ac7523514efe858c9b7529d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.924454  326134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.key ...
	I1123 08:32:14.924476  326134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.key: {Name:mk8c2c1a6932591a0561992965f8ee7640d119a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:14.924758  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/14479.pem (1338 bytes)
	W1123 08:32:14.924815  326134 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10922/.minikube/certs/14479_empty.pem, impossibly tiny 0 bytes
	I1123 08:32:14.924829  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:32:14.924868  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:32:14.924905  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:32:14.924936  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/key.pem (1675 bytes)
	I1123 08:32:14.924997  326134 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10922/.minikube/files/etc/ssl/certs/144792.pem (1708 bytes)
	I1123 08:32:14.925655  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:32:14.945369  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:32:14.968340  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:32:14.988840  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:32:15.007674  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 08:32:15.029568  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:32:15.050180  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:32:15.068810  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/default-k8s-diff-port-589368/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:32:15.089087  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/files/etc/ssl/certs/144792.pem --> /usr/share/ca-certificates/144792.pem (1708 bytes)
	I1123 08:32:15.110582  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:32:15.133022  326134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/certs/14479.pem --> /usr/share/ca-certificates/14479.pem (1338 bytes)
	I1123 08:32:15.153948  326134 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:32:15.173239  326134 ssh_runner.go:195] Run: openssl version
	I1123 08:32:15.183240  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:32:15.194622  326134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:32:15.199634  326134 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:32:15.199779  326134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:32:15.258387  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:32:15.270123  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14479.pem && ln -fs /usr/share/ca-certificates/14479.pem /etc/ssl/certs/14479.pem"
	I1123 08:32:15.284696  326134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14479.pem
	I1123 08:32:15.292891  326134 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14479.pem
	I1123 08:32:15.292972  326134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14479.pem
	I1123 08:32:15.359192  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14479.pem /etc/ssl/certs/51391683.0"
	I1123 08:32:15.373656  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144792.pem && ln -fs /usr/share/ca-certificates/144792.pem /etc/ssl/certs/144792.pem"
	I1123 08:32:15.384091  326134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144792.pem
	I1123 08:32:15.388887  326134 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144792.pem
	I1123 08:32:15.388958  326134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144792.pem
	I1123 08:32:15.443323  326134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144792.pem /etc/ssl/certs/3ec20f2e.0"
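
The three openssl/ln rounds above implement OpenSSL's hashed-directory CA lookup: "openssl x509 -hash -noout" prints the certificate's subject hash (b5213941 for minikubeCA here), and verifiers expect a symlink named "<hash>.0" in /etc/ssl/certs pointing at the PEM. A hypothetical Go helper doing the same two steps:

	package trust

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash mirrors the log's "openssl x509 -hash" +
	// "ln -fs" sequence so OpenSSL-based clients can find the CA by
	// its subject hash. Sketch only.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(certPath, link)
	}
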
	I1123 08:32:15.460097  326134 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:32:15.467434  326134 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:32:15.467521  326134 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-589368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-589368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:15.467625  326134 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:32:15.467703  326134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:32:15.516472  326134 cri.go:89] found id: ""
	I1123 08:32:15.516606  326134 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:32:15.530331  326134 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:32:15.540469  326134 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:32:15.540554  326134 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:32:15.550596  326134 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:32:15.550614  326134 kubeadm.go:158] found existing configuration files:
	
	I1123 08:32:15.550662  326134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 08:32:15.561320  326134 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:32:15.561383  326134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:32:15.572743  326134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 08:32:15.584338  326134 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:32:15.584407  326134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:32:15.595366  326134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 08:32:15.605467  326134 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:32:15.605547  326134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:32:15.615744  326134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 08:32:15.627425  326134 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:32:15.627492  326134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:32:15.638000  326134 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:32:15.688923  326134 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:32:15.688995  326134 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:32:15.720637  326134 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:32:15.720729  326134 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:32:15.720772  326134 kubeadm.go:319] OS: Linux
	I1123 08:32:15.720822  326134 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:32:15.721005  326134 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:32:15.721212  326134 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:32:15.721367  326134 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:32:15.721431  326134 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:32:15.721491  326134 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:32:15.721558  326134 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:32:15.721616  326134 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:32:15.812531  326134 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:32:15.812732  326134 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:32:15.812946  326134 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:32:15.819634  326134 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:32:12.399685  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:12.899290  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:13.399323  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:13.899054  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:14.399443  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:14.899288  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:15.399123  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:15.899731  314870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:32:15.999297  314870 kubeadm.go:1114] duration metric: took 5.199592792s to wait for elevateKubeSystemPrivileges
	I1123 08:32:15.999338  314870 kubeadm.go:403] duration metric: took 18.659838402s to StartCluster
	I1123 08:32:15.999359  314870 settings.go:142] acquiring lock: {Name:mk436e1608db541c991c29c7031bb6bf416025bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:15.999426  314870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:16.001957  314870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10922/kubeconfig: {Name:mk728060aa1e1ef3d8ab678673d9cf01ff53b55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:32:16.002268  314870 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:32:16.002602  314870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:32:16.002804  314870 config.go:182] Loaded profile config "no-preload-073500": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:16.002850  314870 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:32:16.002929  314870 addons.go:70] Setting storage-provisioner=true in profile "no-preload-073500"
	I1123 08:32:16.002961  314870 addons.go:239] Setting addon storage-provisioner=true in "no-preload-073500"
	I1123 08:32:16.002980  314870 addons.go:70] Setting default-storageclass=true in profile "no-preload-073500"
	I1123 08:32:16.002987  314870 host.go:66] Checking if "no-preload-073500" exists ...
	I1123 08:32:16.003009  314870 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-073500"
	I1123 08:32:16.003383  314870 cli_runner.go:164] Run: docker container inspect no-preload-073500 --format={{.State.Status}}
	I1123 08:32:16.003539  314870 cli_runner.go:164] Run: docker container inspect no-preload-073500 --format={{.State.Status}}
	I1123 08:32:16.005225  314870 out.go:179] * Verifying Kubernetes components...
	I1123 08:32:16.006812  314870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:32:16.043915  314870 addons.go:239] Setting addon default-storageclass=true in "no-preload-073500"
	I1123 08:32:16.043964  314870 host.go:66] Checking if "no-preload-073500" exists ...
	I1123 08:32:16.044444  314870 cli_runner.go:164] Run: docker container inspect no-preload-073500 --format={{.State.Status}}
	I1123 08:32:16.044724  314870 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:32:16.048309  314870 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:32:16.048336  314870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:32:16.048406  314870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-073500
	I1123 08:32:16.071830  314870 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:32:16.071903  314870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:32:16.071999  314870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-073500
	I1123 08:32:16.078085  314870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/no-preload-073500/id_rsa Username:docker}
	I1123 08:32:16.109699  314870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/no-preload-073500/id_rsa Username:docker}
	I1123 08:32:16.185396  314870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
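
The pipeline above fetches the coredns ConfigMap, splices a hosts block in front of the "forward . /etc/resolv.conf" directive (and a "log" directive before "errors") with sed, and replaces the ConfigMap. Reconstructed from the sed expressions, the injected Corefile fragment is:

	        hosts {
	           192.168.103.1 host.minikube.internal
	           fallthrough
	        }

"fallthrough" lets every other name continue to the normal forwarders, so only host.minikube.internal is answered from the static table; the injection is confirmed by the "host record injected" line below.
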
	I1123 08:32:16.227817  314870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:32:16.245006  314870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:32:16.281945  314870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:32:16.397837  314870 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 08:32:16.399402  314870 node_ready.go:35] waiting up to 6m0s for node "no-preload-073500" to be "Ready" ...
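
The node_ready.go wait loop re-reads the node object until its Ready condition reports True or the 6m budget expires. A client-go sketch of that polling pattern (assumed helper name, not the test's actual code):

	package ready

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the node's Ready condition on an interval
	// until it is True or the timeout elapses. Illustrative sketch.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: retry rather than abort
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
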
	I1123 08:32:16.610305  314870 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:32:16.611607  314870 addons.go:530] duration metric: took 608.755845ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:32:16.902546  314870 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-073500" context rescaled to 1 replicas
	I1123 08:32:15.821563  326134 out.go:252]   - Generating certificates and keys ...
	I1123 08:32:15.821663  326134 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:32:15.821733  326134 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:32:16.304138  326134 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:32:16.889295  326134 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:32:16.997677  326134 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:32:17.305154  326134 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:32:15.812955  318549 addons.go:530] duration metric: took 657.239994ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:32:16.055379  318549 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-329854" context rescaled to 1 replicas
	W1123 08:32:17.554653  318549 node_ready.go:57] node "embed-certs-329854" has "Ready":"False" status (will retry)
	I1123 08:32:17.593524  326134 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:32:17.593769  326134 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-589368 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:32:17.658683  326134 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:32:17.658875  326134 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-589368 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:32:17.804016  326134 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:32:18.000605  326134 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:32:18.144467  326134 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:32:18.144604  326134 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:32:18.405359  326134 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:32:18.470862  326134 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:32:18.624734  326134 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:32:19.017532  326134 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:32:19.456219  326134 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:32:19.456839  326134 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:32:19.461436  326134 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1123 08:32:18.403264  314870 node_ready.go:57] node "no-preload-073500" has "Ready":"False" status (will retry)
	W1123 08:32:20.902482  314870 node_ready.go:57] node "no-preload-073500" has "Ready":"False" status (will retry)
	I1123 08:32:19.463042  326134 out.go:252]   - Booting up control plane ...
	I1123 08:32:19.463149  326134 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:32:19.463581  326134 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:32:19.464364  326134 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:32:19.480615  326134 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:32:19.480783  326134 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:32:19.487901  326134 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:32:19.488132  326134 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:32:19.488205  326134 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:32:19.603838  326134 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:32:19.604046  326134 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:32:20.604471  326134 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000767222s
	I1123 08:32:20.608717  326134 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:32:20.608835  326134 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1123 08:32:20.609050  326134 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:32:20.609131  326134 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
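
kubeadm's control-plane-check polls one health URL per component, as listed above: the kubelet's plain-HTTP healthz on 10248, the controller-manager and scheduler on their localhost HTTPS ports, and the apiserver's livez on the cluster port 8444. A rough Go equivalent (sketch only; it skips TLS verification for brevity, a shortcut real bootstrap code should not take):

	package health

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkEndpoints hits the same URLs kubeadm prints above and
	// demands a 200 from each. Sketch only.
	func checkEndpoints() error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo shortcut
			},
		}
		urls := []string{
			"http://127.0.0.1:10248/healthz",  // kubelet
			"https://127.0.0.1:10257/healthz", // kube-controller-manager
			"https://127.0.0.1:10259/livez",   // kube-scheduler
			"https://192.168.85.2:8444/livez", // kube-apiserver (port 8444 per the config)
		}
		for _, u := range urls {
			resp, err := client.Get(u)
			if err != nil {
				return err
			}
			resp.Body.Close()
			if resp.StatusCode != http.StatusOK {
				return fmt.Errorf("%s: %s", u, resp.Status)
			}
		}
		return nil
	}
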
	W1123 08:32:20.054368  318549 node_ready.go:57] node "embed-certs-329854" has "Ready":"False" status (will retry)
	W1123 08:32:22.554636  318549 node_ready.go:57] node "embed-certs-329854" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	73dd2722107d4       56cc512116c8f       10 seconds ago      Running             busybox                   0                   409a2ee88f516       busybox                                          default
	bc64aaf15fbe5       ead0a4a53df89       16 seconds ago      Running             coredns                   0                   4ffd31fa157ed       coredns-5dd5756b68-mwh86                         kube-system
	50f0601099a49       6e38f40d628db       16 seconds ago      Running             storage-provisioner       0                   48f513718684b       storage-provisioner                              kube-system
	62d2a524a89ee       409467f978b4a       28 seconds ago      Running             kindnet-cni               0                   705647d71054a       kindnet-lcz6v                                    kube-system
	1ed5b63781114       ea1030da44aa1       32 seconds ago      Running             kube-proxy                0                   f661b855b5cdf       kube-proxy-fjlft                                 kube-system
	d923b5213e8b3       4be79c38a4bab       50 seconds ago      Running             kube-controller-manager   0                   2965fc3ede0d5       kube-controller-manager-old-k8s-version-644335   kube-system
	b8ecb78185d1c       f6f496300a2ae       50 seconds ago      Running             kube-scheduler            0                   c585031e51bcb       kube-scheduler-old-k8s-version-644335            kube-system
	45f46799be931       73deb9a3f7025       50 seconds ago      Running             etcd                      0                   8007c3b8739b4       etcd-old-k8s-version-644335                      kube-system
	24fc7d3f2b9d3       bb5e0dde9054c       50 seconds ago      Running             kube-apiserver            0                   4a573e7a3f588       kube-apiserver-old-k8s-version-644335            kube-system
	
	
	==> containerd <==
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.325596043Z" level=info msg="Container bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.327403635Z" level=info msg="StartContainer for \"50f0601099a49d8b3aa7aa3969c99085cf44a469a178e8abf8471bfa986a8a68\""
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.331298096Z" level=info msg="connecting to shim 50f0601099a49d8b3aa7aa3969c99085cf44a469a178e8abf8471bfa986a8a68" address="unix:///run/containerd/s/78f21c230e1fb6b22bdbb486c34576bdd8e920366f7a88552dfed7aa5f553000" protocol=ttrpc version=3
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.338880612Z" level=info msg="CreateContainer within sandbox \"4ffd31fa157eddcd6f9292a8c27313d69f986601a28013ab0c135931e1cba973\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626\""
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.339903459Z" level=info msg="StartContainer for \"bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626\""
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.341217274Z" level=info msg="connecting to shim bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626" address="unix:///run/containerd/s/fd9ebf2d67921a967fdc9a8838443eeac59b75e77d6d68d910cd26ba2583770b" protocol=ttrpc version=3
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.415217475Z" level=info msg="StartContainer for \"bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626\" returns successfully"
	Nov 23 08:32:08 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:08.415306861Z" level=info msg="StartContainer for \"50f0601099a49d8b3aa7aa3969c99085cf44a469a178e8abf8471bfa986a8a68\" returns successfully"
	Nov 23 08:32:11 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:11.512786868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:37e84c8a-3caa-4e37-9815-c33d14d90a29,Namespace:default,Attempt:0,}"
	Nov 23 08:32:11 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:11.553418847Z" level=info msg="connecting to shim 409a2ee88f5167730944a8fd2efa0563e6a063bd420bbaa66e0d23dd170a6937" address="unix:///run/containerd/s/f305f9ec82c2152d8e0b1c2b423bc40abad24b69084e4d7c8690dd0af6413061" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:32:11 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:11.628215817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:37e84c8a-3caa-4e37-9815-c33d14d90a29,Namespace:default,Attempt:0,} returns sandbox id \"409a2ee88f5167730944a8fd2efa0563e6a063bd420bbaa66e0d23dd170a6937\""
	Nov 23 08:32:11 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:11.630165738Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.856737250Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.857753641Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.859243509Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.861610045Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.862077455Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.231862306s"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.862115634Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.864291954Z" level=info msg="CreateContainer within sandbox \"409a2ee88f5167730944a8fd2efa0563e6a063bd420bbaa66e0d23dd170a6937\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.872244588Z" level=info msg="Container 73dd2722107d47288be1e4b164c5af81e9227f8f9bdb14a886ef57494f460595: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.878943379Z" level=info msg="CreateContainer within sandbox \"409a2ee88f5167730944a8fd2efa0563e6a063bd420bbaa66e0d23dd170a6937\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"73dd2722107d47288be1e4b164c5af81e9227f8f9bdb14a886ef57494f460595\""
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.879607281Z" level=info msg="StartContainer for \"73dd2722107d47288be1e4b164c5af81e9227f8f9bdb14a886ef57494f460595\""
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.880573129Z" level=info msg="connecting to shim 73dd2722107d47288be1e4b164c5af81e9227f8f9bdb14a886ef57494f460595" address="unix:///run/containerd/s/f305f9ec82c2152d8e0b1c2b423bc40abad24b69084e4d7c8690dd0af6413061" protocol=ttrpc version=3
	Nov 23 08:32:13 old-k8s-version-644335 containerd[666]: time="2025-11-23T08:32:13.933418274Z" level=info msg="StartContainer for \"73dd2722107d47288be1e4b164c5af81e9227f8f9bdb14a886ef57494f460595\" returns successfully"
	Nov 23 08:32:21 old-k8s-version-644335 containerd[666]: E1123 08:32:21.312077     666 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [bc64aaf15fbe5e159c99b0d4e5fdad1163ad532a8e3e570d86e10dd4cd4eb626] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37633 - 65137 "HINFO IN 9129358495986739779.2328090660322760570. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01918351s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-644335
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-644335
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=old-k8s-version-644335
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_31_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:31:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-644335
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:32:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:32:10 +0000   Sun, 23 Nov 2025 08:31:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:32:10 +0000   Sun, 23 Nov 2025 08:31:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:32:10 +0000   Sun, 23 Nov 2025 08:31:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:32:10 +0000   Sun, 23 Nov 2025 08:32:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-644335
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                a70afb12-85c0-4a98-8e1d-33bd0981eaa5
	  Boot ID:                    5380b858-5e3f-4ab2-b78d-8704cd56a682
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-mwh86                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     33s
	  kube-system                 etcd-old-k8s-version-644335                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         45s
	  kube-system                 kindnet-lcz6v                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-old-k8s-version-644335             250m (3%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-controller-manager-old-k8s-version-644335    200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-fjlft                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-old-k8s-version-644335             100m (1%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 32s   kube-proxy       
	  Normal  Starting                 45s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  45s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  45s   kubelet          Node old-k8s-version-644335 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s   kubelet          Node old-k8s-version-644335 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s   kubelet          Node old-k8s-version-644335 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           34s   node-controller  Node old-k8s-version-644335 event: Registered Node old-k8s-version-644335 in Controller
	  Normal  NodeReady                18s   kubelet          Node old-k8s-version-644335 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 7d 09 6f 5f 2b 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 d4 5e e6 42 49 08 06
	[ +11.373766] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 a4 f8 6b 15 37 08 06
	[  +0.013916] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[ +40.470104] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	[  +0.167388] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 d3 04 3f 4c f4 08 06
	[  +2.400864] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 01 20 fe a4 35 08 06
	[  +0.000386] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[  +5.210763] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[Nov23 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a c0 03 9d 77 98 08 06
	[  +0.000409] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[ +19.602508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 9b 99 36 e6 f4 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	
	
	==> etcd [45f46799be93103538a709214de68c0cfbbf97b2984f32cac94f7d09dc881032] <==
	{"level":"info","ts":"2025-11-23T08:31:50.375808Z","caller":"traceutil/trace.go:171","msg":"trace[1731385581] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:285; }","duration":"133.686427ms","start":"2025-11-23T08:31:50.242084Z","end":"2025-11-23T08:31:50.375771Z","steps":["trace[1731385581] 'agreement among raft nodes before linearized reading'  (duration: 133.406024ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:31:50.543068Z","caller":"traceutil/trace.go:171","msg":"trace[1083647756] transaction","detail":"{read_only:false; response_revision:288; number_of_response:1; }","duration":"124.895436ms","start":"2025-11-23T08:31:50.41815Z","end":"2025-11-23T08:31:50.543045Z","steps":["trace[1083647756] 'process raft request'  (duration: 122.203936ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:06.815255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.879635ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361280699337 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" mod_revision:345 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" value_size:3753 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:32:06.815491Z","caller":"traceutil/trace.go:171","msg":"trace[764994583] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"337.630134ms","start":"2025-11-23T08:32:06.47784Z","end":"2025-11-23T08:32:06.81547Z","steps":["trace[764994583] 'process raft request'  (duration: 337.507841ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:06.815604Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-23T08:32:06.47782Z","time spent":"337.730787ms","remote":"127.0.0.1:46012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2776,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:373 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:2722 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2025-11-23T08:32:06.815726Z","caller":"traceutil/trace.go:171","msg":"trace[653677693] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"341.506856ms","start":"2025-11-23T08:32:06.474196Z","end":"2025-11-23T08:32:06.815703Z","steps":["trace[653677693] 'process raft request'  (duration: 69.997725ms)","trace[653677693] 'compare'  (duration: 270.78569ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:06.815805Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-23T08:32:06.474175Z","time spent":"341.601245ms","remote":"127.0.0.1:46012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3812,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" mod_revision:345 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" value_size:3753 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" > >"}
	{"level":"info","ts":"2025-11-23T08:32:06.978858Z","caller":"traceutil/trace.go:171","msg":"trace[1104013326] linearizableReadLoop","detail":"{readStateIndex:408; appliedIndex:407; }","duration":"153.807411ms","start":"2025-11-23T08:32:06.825027Z","end":"2025-11-23T08:32:06.978835Z","steps":["trace[1104013326] 'read index received'  (duration: 126.893155ms)","trace[1104013326] 'applied index is now lower than readState.Index'  (duration: 26.91341ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:32:06.978893Z","caller":"traceutil/trace.go:171","msg":"trace[1774032522] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"155.455842ms","start":"2025-11-23T08:32:06.823409Z","end":"2025-11-23T08:32:06.978865Z","steps":["trace[1774032522] 'process raft request'  (duration: 128.518156ms)","trace[1774032522] 'compare'  (duration: 26.787664ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:06.97905Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.337268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:32:06.979151Z","caller":"traceutil/trace.go:171","msg":"trace[367819674] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:392; }","duration":"115.452675ms","start":"2025-11-23T08:32:06.863687Z","end":"2025-11-23T08:32:06.97914Z","steps":["trace[367819674] 'agreement among raft nodes before linearized reading'  (duration: 115.275981ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:06.979074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.057518ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" ","response":"range_response_count:1 size:3827"}
	{"level":"info","ts":"2025-11-23T08:32:06.979238Z","caller":"traceutil/trace.go:171","msg":"trace[657616236] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-mwh86; range_end:; response_count:1; response_revision:392; }","duration":"154.225473ms","start":"2025-11-23T08:32:06.824998Z","end":"2025-11-23T08:32:06.979224Z","steps":["trace[657616236] 'agreement among raft nodes before linearized reading'  (duration: 153.963673ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.221098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.176439ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361280699345 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" mod_revision:389 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" value_size:4635 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-mwh86\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:32:07.221196Z","caller":"traceutil/trace.go:171","msg":"trace[10519771] linearizableReadLoop","detail":"{readStateIndex:410; appliedIndex:409; }","duration":"219.765585ms","start":"2025-11-23T08:32:07.001416Z","end":"2025-11-23T08:32:07.221181Z","steps":["trace[10519771] 'read index received'  (duration: 110.39588ms)","trace[10519771] 'applied index is now lower than readState.Index'  (duration: 109.368598ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:32:07.221414Z","caller":"traceutil/trace.go:171","msg":"trace[288254356] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"235.559152ms","start":"2025-11-23T08:32:06.985843Z","end":"2025-11-23T08:32:07.221403Z","steps":["trace[288254356] 'process raft request'  (duration: 126.008432ms)","trace[288254356] 'compare'  (duration: 109.100658ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:07.221637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.237552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/\" range_end:\"/registry/serviceaccounts/default0\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-23T08:32:07.221676Z","caller":"traceutil/trace.go:171","msg":"trace[2038075669] range","detail":"{range_begin:/registry/serviceaccounts/default/; range_end:/registry/serviceaccounts/default0; response_count:1; response_revision:394; }","duration":"220.283438ms","start":"2025-11-23T08:32:07.001382Z","end":"2025-11-23T08:32:07.221665Z","steps":["trace[2038075669] 'agreement among raft nodes before linearized reading'  (duration: 220.198091ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.221836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.431881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2025-11-23T08:32:07.221869Z","caller":"traceutil/trace.go:171","msg":"trace[47038502] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:394; }","duration":"151.463285ms","start":"2025-11-23T08:32:07.070396Z","end":"2025-11-23T08:32:07.22186Z","steps":["trace[47038502] 'agreement among raft nodes before linearized reading'  (duration: 151.405582ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.2221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.610453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2025-11-23T08:32:07.222135Z","caller":"traceutil/trace.go:171","msg":"trace[324676363] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:394; }","duration":"151.648054ms","start":"2025-11-23T08:32:07.070481Z","end":"2025-11-23T08:32:07.222129Z","steps":["trace[324676363] 'agreement among raft nodes before linearized reading'  (duration: 151.58707ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.375582Z","caller":"traceutil/trace.go:171","msg":"trace[1417634899] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"124.663097ms","start":"2025-11-23T08:32:07.250901Z","end":"2025-11-23T08:32:07.375564Z","steps":["trace[1417634899] 'process raft request'  (duration: 124.509404ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.642408Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.527999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:8 size:41134"}
	{"level":"info","ts":"2025-11-23T08:32:07.642473Z","caller":"traceutil/trace.go:171","msg":"trace[1290776648] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:8; response_revision:395; }","duration":"164.607893ms","start":"2025-11-23T08:32:07.477852Z","end":"2025-11-23T08:32:07.64246Z","steps":["trace[1290776648] 'range keys from in-memory index tree'  (duration: 164.355272ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:32:24 up  1:14,  0 user,  load average: 5.08, 3.86, 2.49
	Linux old-k8s-version-644335 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [62d2a524a89eea1a86a495032f44dc4fcf3f295b6a37b5252e52f94b48d1d408] <==
	I1123 08:31:55.985760       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:31:55.986183       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 08:31:55.986889       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:31:55.986985       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:31:55.987037       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:31:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:31:56.284272       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:31:56.284302       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:31:56.284313       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:31:56.306661       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:31:56.684624       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:31:56.706097       1 metrics.go:72] Registering metrics
	I1123 08:31:56.706300       1 controller.go:711] "Syncing nftables rules"
	I1123 08:32:06.290264       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:32:06.290315       1 main.go:301] handling current node
	I1123 08:32:16.284771       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:32:16.284832       1 main.go:301] handling current node
	
	
	==> kube-apiserver [24fc7d3f2b9d331ec00ee0d24edc912d0f9231bf439464acf39ca1b352dbd9ae] <==
	I1123 08:31:35.987204       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 08:31:35.987463       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:31:35.987866       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 08:31:35.987904       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:31:35.987913       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:31:35.987964       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:31:35.987989       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:31:35.988740       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 08:31:35.990046       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:31:36.184555       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:31:36.894578       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:31:36.898290       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:31:36.898313       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:31:37.418846       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:31:37.463855       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:31:37.604156       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:31:37.611375       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1123 08:31:37.612546       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:31:37.617251       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:31:37.947132       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:31:38.954904       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:31:38.975029       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:31:38.988942       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 08:31:51.510002       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 08:31:51.707000       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [d923b5213e8b3d8af44cbc9fa87cabbfe8d1fb7bc4713bfb51ef49ec43be859f] <==
	I1123 08:31:50.945363       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1123 08:31:50.950847       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:31:51.002028       1 shared_informer.go:318] Caches are synced for disruption
	I1123 08:31:51.318133       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:31:51.318170       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:31:51.354770       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:31:51.517923       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 08:31:51.718486       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fjlft"
	I1123 08:31:51.721754       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lcz6v"
	I1123 08:31:51.821332       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jvkkt"
	I1123 08:31:51.842261       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mwh86"
	I1123 08:31:51.861032       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="344.408288ms"
	I1123 08:31:51.877129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.028296ms"
	I1123 08:31:51.902023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.822241ms"
	I1123 08:31:51.902333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="182.107µs"
	I1123 08:31:52.329982       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 08:31:52.342048       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-jvkkt"
	I1123 08:31:52.351470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.923596ms"
	I1123 08:31:52.359652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.080621ms"
	I1123 08:31:52.360693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="563.114µs"
	I1123 08:32:06.818990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.609µs"
	I1123 08:32:07.223787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.532µs"
	I1123 08:32:09.303387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.481025ms"
	I1123 08:32:09.303531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.899µs"
	I1123 08:32:10.919115       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [1ed5b637811145d9fff3280921703c00b63f4c3b1c52c2f4c5440f7cd29f382f] <==
	I1123 08:31:52.411015       1 server_others.go:69] "Using iptables proxy"
	I1123 08:31:52.425293       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1123 08:31:52.450105       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:31:52.453169       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:31:52.453286       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:31:52.453303       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:31:52.453343       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:31:52.453700       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:31:52.453721       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:31:52.454600       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:31:52.454743       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:31:52.454776       1 config.go:188] "Starting service config controller"
	I1123 08:31:52.454782       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:31:52.454667       1 config.go:315] "Starting node config controller"
	I1123 08:31:52.454794       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:31:52.554947       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 08:31:52.555084       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:31:52.555678       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b8ecb78185d1c1fe98dec4b47de4066108120b3e315e7bbf74d7a4cc46af1cf0] <==
	W1123 08:31:35.950545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 08:31:35.950610       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 08:31:36.785890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 08:31:36.785936       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 08:31:36.788876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1123 08:31:36.788913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 08:31:36.809968       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 08:31:36.810019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 08:31:36.913744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:31:36.913787       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:31:37.072670       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 08:31:37.072723       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 08:31:37.079728       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 08:31:37.079769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 08:31:37.113046       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 08:31:37.113122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 08:31:37.132201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 08:31:37.132261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 08:31:37.158811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 08:31:37.158853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 08:31:37.183654       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 08:31:37.183698       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 08:31:37.332055       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 08:31:37.332096       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1123 08:31:39.848190       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:31:50 old-k8s-version-644335 kubelet[1500]: I1123 08:31:50.851565    1500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.727543    1500 topology_manager.go:215] "Topology Admit Handler" podUID="43a841de-4dd0-46a2-aae4-901399aa0515" podNamespace="kube-system" podName="kube-proxy-fjlft"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.732323    1500 topology_manager.go:215] "Topology Admit Handler" podUID="eac5ce99-6c74-46c9-a0c0-a595c22303e4" podNamespace="kube-system" podName="kindnet-lcz6v"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.758902    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43a841de-4dd0-46a2-aae4-901399aa0515-kube-proxy\") pod \"kube-proxy-fjlft\" (UID: \"43a841de-4dd0-46a2-aae4-901399aa0515\") " pod="kube-system/kube-proxy-fjlft"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759010    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eac5ce99-6c74-46c9-a0c0-a595c22303e4-xtables-lock\") pod \"kindnet-lcz6v\" (UID: \"eac5ce99-6c74-46c9-a0c0-a595c22303e4\") " pod="kube-system/kindnet-lcz6v"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759065    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcrlc\" (UniqueName: \"kubernetes.io/projected/eac5ce99-6c74-46c9-a0c0-a595c22303e4-kube-api-access-fcrlc\") pod \"kindnet-lcz6v\" (UID: \"eac5ce99-6c74-46c9-a0c0-a595c22303e4\") " pod="kube-system/kindnet-lcz6v"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759106    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43a841de-4dd0-46a2-aae4-901399aa0515-lib-modules\") pod \"kube-proxy-fjlft\" (UID: \"43a841de-4dd0-46a2-aae4-901399aa0515\") " pod="kube-system/kube-proxy-fjlft"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759135    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5k64\" (UniqueName: \"kubernetes.io/projected/43a841de-4dd0-46a2-aae4-901399aa0515-kube-api-access-m5k64\") pod \"kube-proxy-fjlft\" (UID: \"43a841de-4dd0-46a2-aae4-901399aa0515\") " pod="kube-system/kube-proxy-fjlft"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759163    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eac5ce99-6c74-46c9-a0c0-a595c22303e4-lib-modules\") pod \"kindnet-lcz6v\" (UID: \"eac5ce99-6c74-46c9-a0c0-a595c22303e4\") " pod="kube-system/kindnet-lcz6v"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759211    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43a841de-4dd0-46a2-aae4-901399aa0515-xtables-lock\") pod \"kube-proxy-fjlft\" (UID: \"43a841de-4dd0-46a2-aae4-901399aa0515\") " pod="kube-system/kube-proxy-fjlft"
	Nov 23 08:31:51 old-k8s-version-644335 kubelet[1500]: I1123 08:31:51.759242    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eac5ce99-6c74-46c9-a0c0-a595c22303e4-cni-cfg\") pod \"kindnet-lcz6v\" (UID: \"eac5ce99-6c74-46c9-a0c0-a595c22303e4\") " pod="kube-system/kindnet-lcz6v"
	Nov 23 08:31:56 old-k8s-version-644335 kubelet[1500]: I1123 08:31:56.239581    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fjlft" podStartSLOduration=5.239491113 podCreationTimestamp="2025-11-23 08:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:31:53.233477489 +0000 UTC m=+14.309128463" watchObservedRunningTime="2025-11-23 08:31:56.239491113 +0000 UTC m=+17.315142087"
	Nov 23 08:31:56 old-k8s-version-644335 kubelet[1500]: I1123 08:31:56.239759    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-lcz6v" podStartSLOduration=2.078949801 podCreationTimestamp="2025-11-23 08:31:51 +0000 UTC" firstStartedPulling="2025-11-23 08:31:52.421191166 +0000 UTC m=+13.496842130" lastFinishedPulling="2025-11-23 08:31:55.581967422 +0000 UTC m=+16.657618387" observedRunningTime="2025-11-23 08:31:56.239122504 +0000 UTC m=+17.314773487" watchObservedRunningTime="2025-11-23 08:31:56.239726058 +0000 UTC m=+17.315377029"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.386223    1500 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.822222    1500 topology_manager.go:215] "Topology Admit Handler" podUID="fbe4548b-bcc9-427c-afbe-4a04f65d1997" podNamespace="kube-system" podName="coredns-5dd5756b68-mwh86"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.825399    1500 topology_manager.go:215] "Topology Admit Handler" podUID="8bbfd059-0548-413b-bc78-b5b6446505a3" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.967962    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8bbfd059-0548-413b-bc78-b5b6446505a3-tmp\") pod \"storage-provisioner\" (UID: \"8bbfd059-0548-413b-bc78-b5b6446505a3\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.968019    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbe4548b-bcc9-427c-afbe-4a04f65d1997-config-volume\") pod \"coredns-5dd5756b68-mwh86\" (UID: \"fbe4548b-bcc9-427c-afbe-4a04f65d1997\") " pod="kube-system/coredns-5dd5756b68-mwh86"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.968147    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z62f\" (UniqueName: \"kubernetes.io/projected/8bbfd059-0548-413b-bc78-b5b6446505a3-kube-api-access-4z62f\") pod \"storage-provisioner\" (UID: \"8bbfd059-0548-413b-bc78-b5b6446505a3\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:06 old-k8s-version-644335 kubelet[1500]: I1123 08:32:06.968201    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8xn\" (UniqueName: \"kubernetes.io/projected/fbe4548b-bcc9-427c-afbe-4a04f65d1997-kube-api-access-kr8xn\") pod \"coredns-5dd5756b68-mwh86\" (UID: \"fbe4548b-bcc9-427c-afbe-4a04f65d1997\") " pod="kube-system/coredns-5dd5756b68-mwh86"
	Nov 23 08:32:09 old-k8s-version-644335 kubelet[1500]: I1123 08:32:09.289054    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mwh86" podStartSLOduration=18.288991864 podCreationTimestamp="2025-11-23 08:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:09.288745026 +0000 UTC m=+30.364395999" watchObservedRunningTime="2025-11-23 08:32:09.288991864 +0000 UTC m=+30.364642833"
	Nov 23 08:32:09 old-k8s-version-644335 kubelet[1500]: I1123 08:32:09.289205    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.289173734 podCreationTimestamp="2025-11-23 08:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:09.274686083 +0000 UTC m=+30.350337055" watchObservedRunningTime="2025-11-23 08:32:09.289173734 +0000 UTC m=+30.364824706"
	Nov 23 08:32:11 old-k8s-version-644335 kubelet[1500]: I1123 08:32:11.201560    1500 topology_manager.go:215] "Topology Admit Handler" podUID="37e84c8a-3caa-4e37-9815-c33d14d90a29" podNamespace="default" podName="busybox"
	Nov 23 08:32:11 old-k8s-version-644335 kubelet[1500]: I1123 08:32:11.395725    1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnt7m\" (UniqueName: \"kubernetes.io/projected/37e84c8a-3caa-4e37-9815-c33d14d90a29-kube-api-access-bnt7m\") pod \"busybox\" (UID: \"37e84c8a-3caa-4e37-9815-c33d14d90a29\") " pod="default/busybox"
	Nov 23 08:32:14 old-k8s-version-644335 kubelet[1500]: I1123 08:32:14.288604    1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.055523078 podCreationTimestamp="2025-11-23 08:32:11 +0000 UTC" firstStartedPulling="2025-11-23 08:32:11.629681818 +0000 UTC m=+32.705332772" lastFinishedPulling="2025-11-23 08:32:13.86254197 +0000 UTC m=+34.938192935" observedRunningTime="2025-11-23 08:32:14.288364717 +0000 UTC m=+35.364015689" watchObservedRunningTime="2025-11-23 08:32:14.288383241 +0000 UTC m=+35.364034209"
	
	
	==> storage-provisioner [50f0601099a49d8b3aa7aa3969c99085cf44a469a178e8abf8471bfa986a8a68] <==
	I1123 08:32:08.428251       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:32:08.443328       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:32:08.443473       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:32:08.453585       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:32:08.453709       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-644335_5327fecd-767a-4068-8265-dc7d74cde00f!
	I1123 08:32:08.453833       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2326ae7d-273a-46ce-b18b-ec889e34408f", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-644335_5327fecd-767a-4068-8265-dc7d74cde00f became leader
	I1123 08:32:08.554756       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-644335_5327fecd-767a-4068-8265-dc7d74cde00f!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-644335 -n old-k8s-version-644335
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-644335 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (14.57s)

TestStartStop/group/embed-certs/serial/DeployApp (12.98s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-329854 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0629f671-0400-4d43-ab3d-5b435bcf3b1f] Pending
helpers_test.go:352: "busybox" [0629f671-0400-4d43-ab3d-5b435bcf3b1f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0629f671-0400-4d43-ab3d-5b435bcf3b1f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00426501s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-329854 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
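The failing assertion is a single shell comparison: the test execs `ulimit -n` in the busybox pod and expects the propagated open-file limit of 1048576, not the 1024 it got. A minimal standalone reproduction in Go (a sketch assuming kubectl on PATH and the pod from testdata/busybox.yaml; not the actual start_stop_delete_test.go code):

	// repro_ulimit.go: re-run the check the test performs and report a mismatch.
	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)

	func main() {
		// Same command the test runs; context name taken from the log above.
		out, err := exec.Command("kubectl", "--context", "embed-certs-329854",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			panic(err)
		}
		got, err := strconv.Atoi(strings.TrimSpace(string(out)))
		if err != nil {
			panic(err)
		}
		const want = 1048576 // limit the test expects inside the container
		if got != want {
			fmt.Printf("ulimit -n = %d, want %d\n", got, want)
		}
	}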
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-329854
helpers_test.go:243: (dbg) docker inspect embed-certs-329854:

-- stdout --
	[
	    {
	        "Id": "83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b",
	        "Created": "2025-11-23T08:31:50.789886741Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:31:50.860364437Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b/hostname",
	        "HostsPath": "/var/lib/docker/containers/83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b/hosts",
	        "LogPath": "/var/lib/docker/containers/83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b/83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b-json.log",
	        "Name": "/embed-certs-329854",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-329854:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-329854",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b",
	                "LowerDir": "/var/lib/docker/overlay2/0388bb39a86a5452cdb69ef0c1797fc05acc0a55cf5eb6b7c0083831127c653a-init/diff:/var/lib/docker/overlay2/f8ae64c4d7d1e12e69b7d69a01d34a96c2f353aeac48a9b438b028f010c32149/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0388bb39a86a5452cdb69ef0c1797fc05acc0a55cf5eb6b7c0083831127c653a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0388bb39a86a5452cdb69ef0c1797fc05acc0a55cf5eb6b7c0083831127c653a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0388bb39a86a5452cdb69ef0c1797fc05acc0a55cf5eb6b7c0083831127c653a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-329854",
	                "Source": "/var/lib/docker/volumes/embed-certs-329854/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-329854",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-329854",
	                "name.minikube.sigs.k8s.io": "embed-certs-329854",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5e07b6d0707c62d10bc9fb3a65701ca9dcf4032f240a3e592d19a99341eb4640",
	            "SandboxKey": "/var/run/docker/netns/5e07b6d0707c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-329854": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4bf2fad4a2d5aea832f3c3335ef371bf783b79d7adfe5f72a9e7e2534707d576",
	                    "EndpointID": "e431aff39ef58672bf08f226660b561b92b250126e405446ee8466b12b7b16c7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "aa:42:27:9f:3e:24",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-329854",
	                        "83f5cb4713ef"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
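Worth noting in the inspect output above: `"Ulimits": []` under HostConfig, i.e. no explicit nofile override is set on the node container, so the limit inside it is whatever the Docker daemon (and, below it, containerd's unit configuration) passes down. A quick host-side check (a sketch; the profile name comes from this log, and the empty slice matches the dump above):

	// print_ulimits.go: show any explicit ulimit overrides on the container.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "inspect",
			"-f", "{{json .HostConfig.Ulimits}}", "embed-certs-329854").Output()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out)) // "[]" means daemon defaults apply
	}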
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-329854 -n embed-certs-329854
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-329854 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-329854 logs -n 25: (1.024938794s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-366757 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo docker system info                                                                                                                                                                                                            │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo containerd config dump                                                                                                                                                                                                        │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo crio config                                                                                                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ delete  │ -p bridge-366757                                                                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-900754                                                                                                                                                                                                                     │ disable-driver-mounts-900754 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-589368 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-644335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ stop    │ -p old-k8s-version-644335 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-644335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p old-k8s-version-644335 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:32:39
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:32:39.044690  334214 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:32:39.044936  334214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:39.044944  334214 out.go:374] Setting ErrFile to fd 2...
	I1123 08:32:39.044948  334214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:39.045161  334214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:32:39.045639  334214 out.go:368] Setting JSON to false
	I1123 08:32:39.046982  334214 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4497,"bootTime":1763882262,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:32:39.047047  334214 start.go:143] virtualization: kvm guest
	I1123 08:32:39.049146  334214 out.go:179] * [old-k8s-version-644335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:32:39.050496  334214 notify.go:221] Checking for updates...
	I1123 08:32:39.050526  334214 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:32:39.052898  334214 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:32:39.054570  334214 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:39.055701  334214 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 08:32:39.056898  334214 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:32:39.058305  334214 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:32:39.059876  334214 config.go:182] Loaded profile config "old-k8s-version-644335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:32:39.061465  334214 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 08:32:39.062491  334214 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:32:39.087411  334214 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:32:39.087536  334214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:39.147576  334214 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:32:39.137613264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:39.147691  334214 docker.go:319] overlay module found
	I1123 08:32:39.149361  334214 out.go:179] * Using the docker driver based on existing profile
	I1123 08:32:39.150582  334214 start.go:309] selected driver: docker
	I1123 08:32:39.150595  334214 start.go:927] validating driver "docker" against &{Name:old-k8s-version-644335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-644335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:39.150676  334214 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:32:39.151208  334214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:39.210357  334214 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:32:39.19964774 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:39.210699  334214 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:32:39.210735  334214 cni.go:84] Creating CNI manager for ""
	I1123 08:32:39.210806  334214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:39.210857  334214 start.go:353] cluster config:
	{Name:old-k8s-version-644335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-644335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:39.212801  334214 out.go:179] * Starting "old-k8s-version-644335" primary control-plane node in "old-k8s-version-644335" cluster
	I1123 08:32:39.213896  334214 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:32:39.214895  334214 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:32:39.216134  334214 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:32:39.216186  334214 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1123 08:32:39.216199  334214 cache.go:65] Caching tarball of preloaded images
	I1123 08:32:39.216287  334214 preload.go:238] Found /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:32:39.216300  334214 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:32:39.216306  334214 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:32:39.216427  334214 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/old-k8s-version-644335/config.json ...
	I1123 08:32:39.239444  334214 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:32:39.239465  334214 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:32:39.239488  334214 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:32:39.239557  334214 start.go:360] acquireMachinesLock for old-k8s-version-644335: {Name:mk2d92388f6ee555f9afab8f780d1d668db94689 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:32:39.239638  334214 start.go:364] duration metric: took 43.187µs to acquireMachinesLock for "old-k8s-version-644335"
	I1123 08:32:39.239663  334214 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:32:39.239673  334214 fix.go:54] fixHost starting: 
	I1123 08:32:39.239964  334214 cli_runner.go:164] Run: docker container inspect old-k8s-version-644335 --format={{.State.Status}}
	I1123 08:32:39.261431  334214 fix.go:112] recreateIfNeeded on old-k8s-version-644335: state=Stopped err=<nil>
	W1123 08:32:39.261471  334214 fix.go:138] unexpected machine state, will restart: <nil>
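	
	The driver validation in the log above reduces to running docker system info --format "{{json .}}" (it appears twice, once for detection and once for the health check) and decoding the JSON. A quick way to spot-check the same fields by hand, assuming jq is available on the host, which this report does not confirm:
	
	  docker system info --format "{{json .}}" | jq '{ServerVersion, CgroupDriver, NCPU, MemTotal}'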
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	67bbbf97b07c4       56cc512116c8f       7 seconds ago       Running             busybox                   0                   02a7aa69ca459       busybox                                      default
	cde279b802d9a       52546a367cc9e       13 seconds ago      Running             coredns                   0                   bf385555f8c72       coredns-66bc5c9577-dw7dl                     kube-system
	8b6c8a33e3d89       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   6c955aa12c65e       storage-provisioner                          kube-system
	73a337a27bc15       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   64aedf5a5aa9b       kindnet-z7lvd                                kube-system
	32bd2712086d0       fc25172553d79       24 seconds ago      Running             kube-proxy                0                   c9f3f047d6f2c       kube-proxy-gvb9r                             kube-system
	b04fdb68dd471       c80c8dbafe7dd       35 seconds ago      Running             kube-controller-manager   0                   d3adbb493b759       kube-controller-manager-embed-certs-329854   kube-system
	5cba922395475       7dd6aaa1717ab       35 seconds ago      Running             kube-scheduler            0                   515e76b7ebcb8       kube-scheduler-embed-certs-329854            kube-system
	a6e6c3835f2dc       c3994bc696102       35 seconds ago      Running             kube-apiserver            0                   daed34440af15       kube-apiserver-embed-certs-329854            kube-system
	15b6db557f461       5f1f5298c888d       35 seconds ago      Running             etcd                      0                   0e6372038704c       etcd-embed-certs-329854                      kube-system
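	
	The table above is CRI-level output rather than kubectl state; on a live profile the same listing can be reproduced from inside the node with crictl, which ships in the kicbase image (profile name is a placeholder here):
	
	  minikube ssh -p <profile> -- sudo crictl ps -a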
	
	
	==> containerd <==
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.199569424Z" level=info msg="CreateContainer within sandbox \"6c955aa12c65e1f5f0185587d1c86a6c102ab6f0e7cd631e9bcec671339ddc21\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"8b6c8a33e3d89d92fd4191fb8bb09447e85c82b24a01bbb5960eb52c98e4fd1c\""
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.200064724Z" level=info msg="StartContainer for \"8b6c8a33e3d89d92fd4191fb8bb09447e85c82b24a01bbb5960eb52c98e4fd1c\""
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.200905499Z" level=info msg="connecting to shim 8b6c8a33e3d89d92fd4191fb8bb09447e85c82b24a01bbb5960eb52c98e4fd1c" address="unix:///run/containerd/s/f0ce09a4207fd1c8f40d825a29514a9faa04fd72abeccbcac95907227117c565" protocol=ttrpc version=3
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.204497717Z" level=info msg="Container cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.210620810Z" level=info msg="CreateContainer within sandbox \"bf385555f8c72e4ccca786d5af5bc16598cbd29824b34d74d2dbf4f554d4f340\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1\""
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.211616106Z" level=info msg="StartContainer for \"cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1\""
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.212630767Z" level=info msg="connecting to shim cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1" address="unix:///run/containerd/s/72991981ea7c5e1f1d97b1f59ed34f5e9b872151ceafd6e6c74536fae3edb12b" protocol=ttrpc version=3
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.252335007Z" level=info msg="StartContainer for \"8b6c8a33e3d89d92fd4191fb8bb09447e85c82b24a01bbb5960eb52c98e4fd1c\" returns successfully"
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.259973006Z" level=info msg="StartContainer for \"cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1\" returns successfully"
	Nov 23 08:32:30 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:30.538719841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0629f671-0400-4d43-ab3d-5b435bcf3b1f,Namespace:default,Attempt:0,}"
	Nov 23 08:32:30 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:30.576384999Z" level=info msg="connecting to shim 02a7aa69ca4598ca8927a3c244bd3322aae1774291a0251c27f64a09f855b42b" address="unix:///run/containerd/s/0bcccb60a69657dc9cf126264aea667b6116d665b0cda09941bed4ca328ad957" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:32:30 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:30.657368713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0629f671-0400-4d43-ab3d-5b435bcf3b1f,Namespace:default,Attempt:0,} returns sandbox id \"02a7aa69ca4598ca8927a3c244bd3322aae1774291a0251c27f64a09f855b42b\""
	Nov 23 08:32:30 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:30.661009637Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.942693428Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.943408934Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.944909944Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.946956011Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.947200922Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.285942018s"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.947237952Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.951545765Z" level=info msg="CreateContainer within sandbox \"02a7aa69ca4598ca8927a3c244bd3322aae1774291a0251c27f64a09f855b42b\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.959060274Z" level=info msg="Container 67bbbf97b07c4ce60dd1fff7948e3be427bb55e44d263efdadc3c7caf9a86cfb: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.965253297Z" level=info msg="CreateContainer within sandbox \"02a7aa69ca4598ca8927a3c244bd3322aae1774291a0251c27f64a09f855b42b\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"67bbbf97b07c4ce60dd1fff7948e3be427bb55e44d263efdadc3c7caf9a86cfb\""
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.965883922Z" level=info msg="StartContainer for \"67bbbf97b07c4ce60dd1fff7948e3be427bb55e44d263efdadc3c7caf9a86cfb\""
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.966685456Z" level=info msg="connecting to shim 67bbbf97b07c4ce60dd1fff7948e3be427bb55e44d263efdadc3c7caf9a86cfb" address="unix:///run/containerd/s/0bcccb60a69657dc9cf126264aea667b6116d665b0cda09941bed4ca328ad957" protocol=ttrpc version=3
	Nov 23 08:32:33 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:33.024237318Z" level=info msg="StartContainer for \"67bbbf97b07c4ce60dd1fff7948e3be427bb55e44d263efdadc3c7caf9a86cfb\" returns successfully"
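	
	The busybox entries above trace the standard CRI pod lifecycle: RunPodSandbox, then PullImage, then CreateContainer, then StartContainer, with the image pull accounting for roughly 2.3s of the deploy time. To exercise the pull step in isolation from inside the node, something like:
	
	  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	  sudo crictl images | grep busybox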
	
	
	==> coredns [cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36048 - 58637 "HINFO IN 7336470861756283583.6089140466787590237. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064671584s
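	
	The single NXDOMAIN line above is CoreDNS's own HINFO self-probe at startup, not a client lookup failing. A standard in-cluster resolution check against this CoreDNS (a common debugging recipe, not something this run performed) would be:
	
	  kubectl run dnstest --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default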
	
	
	==> describe nodes <==
	Name:               embed-certs-329854
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-329854
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=embed-certs-329854
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_32_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:32:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-329854
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:32:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:32:26 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:32:26 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:32:26 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:32:26 +0000   Sun, 23 Nov 2025 08:32:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-329854
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                1b12f56c-96ed-41b8-b155-bc9fc80bca67
	  Boot ID:                    5380b858-5e3f-4ab2-b78d-8704cd56a682
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-dw7dl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-329854                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-z7lvd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-329854             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-329854    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-gvb9r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-329854             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-329854 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-329854 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-329854 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node embed-certs-329854 event: Registered Node embed-certs-329854 in Controller
	  Normal  NodeReady                14s   kubelet          Node embed-certs-329854 status is now: NodeReady
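	
	The Allocated resources block is the per-column sum of the pod table above, measured against the node's allocatable capacity:
	
	  cpu requests:    100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, about 10% of 8 CPUs
	  cpu limits:      kindnet's 100m is the only limit set
	  memory requests: 70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet) = 220Mi
	  memory limits:   170Mi (coredns) + 50Mi (kindnet) = 220Mi, the same total by coincidence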
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 7d 09 6f 5f 2b 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 d4 5e e6 42 49 08 06
	[ +11.373766] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 a4 f8 6b 15 37 08 06
	[  +0.013916] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[ +40.470104] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	[  +0.167388] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 d3 04 3f 4c f4 08 06
	[  +2.400864] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 01 20 fe a4 35 08 06
	[  +0.000386] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[  +5.210763] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[Nov23 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a c0 03 9d 77 98 08 06
	[  +0.000409] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[ +19.602508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 9b 99 36 e6 f4 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
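	
	The repeated martian-source lines are the kernel flagging pod-subnet source addresses (kindnet's noMask range 10.244.0.0/16) that it did not expect to see on eth0; with a bridged CNI this is routine log noise rather than a fault. Whether this logging is enabled on a host can be checked with:
	
	  sysctl net.ipv4.conf.all.log_martians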
	
	
	==> etcd [15b6db557f4615f6a9f5315841cc0e76f69a6b2df661553ed8cc53a6ca5854df] <==
	{"level":"info","ts":"2025-11-23T08:32:07.817940Z","caller":"traceutil/trace.go:172","msg":"trace[759510837] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"155.641936ms","start":"2025-11-23T08:32:07.662293Z","end":"2025-11-23T08:32:07.817935Z","steps":["trace[759510837] 'process raft request'  (duration: 155.381549ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.817963Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.192501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:07.817973Z","caller":"traceutil/trace.go:172","msg":"trace[469815305] transaction","detail":"{read_only:false; response_revision:23; number_of_response:1; }","duration":"160.782473ms","start":"2025-11-23T08:32:07.657184Z","end":"2025-11-23T08:32:07.817967Z","steps":["trace[469815305] 'process raft request'  (duration: 160.287671ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.818009Z","caller":"traceutil/trace.go:172","msg":"trace[1646389769] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:30; }","duration":"150.22602ms","start":"2025-11-23T08:32:07.667759Z","end":"2025-11-23T08:32:07.817985Z","steps":["trace[1646389769] 'agreement among raft nodes before linearized reading'  (duration: 150.167611ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.818046Z","caller":"traceutil/trace.go:172","msg":"trace[1370300925] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"156.862658ms","start":"2025-11-23T08:32:07.661175Z","end":"2025-11-23T08:32:07.818037Z","steps":["trace[1370300925] 'process raft request'  (duration: 156.398808ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.818212Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.780647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:07.818240Z","caller":"traceutil/trace.go:172","msg":"trace[1592496773] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:30; }","duration":"156.815602ms","start":"2025-11-23T08:32:07.661417Z","end":"2025-11-23T08:32:07.818233Z","steps":["trace[1592496773] 'agreement among raft nodes before linearized reading'  (duration: 156.747165ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.818635Z","caller":"traceutil/trace.go:172","msg":"trace[143842581] transaction","detail":"{read_only:false; response_revision:26; number_of_response:1; }","duration":"157.044231ms","start":"2025-11-23T08:32:07.661581Z","end":"2025-11-23T08:32:07.818625Z","steps":["trace[143842581] 'process raft request'  (duration: 156.013571ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.139702Z","caller":"traceutil/trace.go:172","msg":"trace[826091995] transaction","detail":"{read_only:false; response_revision:40; number_of_response:1; }","duration":"230.337793ms","start":"2025-11-23T08:32:07.909346Z","end":"2025-11-23T08:32:08.139684Z","steps":["trace[826091995] 'process raft request'  (duration: 230.305114ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.139919Z","caller":"traceutil/trace.go:172","msg":"trace[509800730] transaction","detail":"{read_only:false; response_revision:32; number_of_response:1; }","duration":"311.142479ms","start":"2025-11-23T08:32:07.828759Z","end":"2025-11-23T08:32:08.139902Z","steps":["trace[509800730] 'process raft request'  (duration: 255.416716ms)","trace[509800730] 'compare'  (duration: 55.06443ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:08.140029Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.828739Z","time spent":"311.240846ms","remote":"127.0.0.1:59358","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":711,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/csinodes/embed-certs-329854\" mod_revision:0 > success:<request_put:<key:\"/registry/csinodes/embed-certs-329854\" value_size:666 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:32:08.140157Z","caller":"traceutil/trace.go:172","msg":"trace[616308197] transaction","detail":"{read_only:false; response_revision:36; number_of_response:1; }","duration":"310.596811ms","start":"2025-11-23T08:32:07.829542Z","end":"2025-11-23T08:32:08.140139Z","steps":["trace[616308197] 'process raft request'  (duration: 309.978047ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.140206Z","caller":"traceutil/trace.go:172","msg":"trace[138202270] transaction","detail":"{read_only:false; response_revision:34; number_of_response:1; }","duration":"310.931177ms","start":"2025-11-23T08:32:07.829256Z","end":"2025-11-23T08:32:08.140188Z","steps":["trace[138202270] 'process raft request'  (duration: 310.183535ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.140250Z","caller":"traceutil/trace.go:172","msg":"trace[1317935024] transaction","detail":"{read_only:false; response_revision:37; number_of_response:1; }","duration":"310.532327ms","start":"2025-11-23T08:32:07.829708Z","end":"2025-11-23T08:32:08.140240Z","steps":["trace[1317935024] 'process raft request'  (duration: 309.85702ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.140277Z","caller":"traceutil/trace.go:172","msg":"trace[468149616] transaction","detail":"{read_only:false; response_revision:35; number_of_response:1; }","duration":"310.964482ms","start":"2025-11-23T08:32:07.829306Z","end":"2025-11-23T08:32:08.140271Z","steps":["trace[468149616] 'process raft request'  (duration: 310.154304ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:08.140302Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829698Z","time spent":"310.576449ms","remote":"127.0.0.1:58496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":350,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/namespaces/kube-node-lease\" mod_revision:0 > success:<request_put:<key:\"/registry/namespaces/kube-node-lease\" value_size:306 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T08:32:08.140307Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829300Z","time spent":"310.994522ms","remote":"127.0.0.1:59644","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":959,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.scheduling.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.scheduling.k8s.io\" value_size:886 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:32:08.140426Z","caller":"traceutil/trace.go:172","msg":"trace[1279238835] transaction","detail":"{read_only:false; response_revision:33; number_of_response:1; }","duration":"311.231329ms","start":"2025-11-23T08:32:07.829187Z","end":"2025-11-23T08:32:08.140418Z","steps":["trace[1279238835] 'process raft request'  (duration: 310.21164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:08.140277Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829248Z","time spent":"310.995336ms","remote":"127.0.0.1:59644","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":941,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.node.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.node.k8s.io\" value_size:874 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T08:32:08.140463Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829169Z","time spent":"311.27667ms","remote":"127.0.0.1:59644","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":926,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.policy\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.policy\" value_size:864 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:32:08.140528Z","caller":"traceutil/trace.go:172","msg":"trace[994918012] transaction","detail":"{read_only:false; response_revision:38; number_of_response:1; }","duration":"310.742346ms","start":"2025-11-23T08:32:07.829776Z","end":"2025-11-23T08:32:08.140518Z","steps":["trace[994918012] 'process raft request'  (duration: 309.813739ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:08.140581Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829761Z","time spent":"310.790532ms","remote":"127.0.0.1:59436","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":703,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/prioritylevelconfigurations/workload-high\" mod_revision:0 > success:<request_put:<key:\"/registry/prioritylevelconfigurations/workload-high\" value_size:644 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:32:08.140654Z","caller":"traceutil/trace.go:172","msg":"trace[1142020195] transaction","detail":"{read_only:false; response_revision:39; number_of_response:1; }","duration":"310.631253ms","start":"2025-11-23T08:32:07.830014Z","end":"2025-11-23T08:32:08.140645Z","steps":["trace[1142020195] 'process raft request'  (duration: 309.607869ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:08.140705Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.830003Z","time spent":"310.671593ms","remote":"127.0.0.1:59644","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":953,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.resource.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.resource.k8s.io\" value_size:882 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T08:32:08.140252Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829521Z","time spent":"310.681736ms","remote":"127.0.0.1:58464","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3008,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" mod_revision:0 > success:<request_put:<key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" value_size:2933 >> failure:<>"}
	
	
	==> kernel <==
	 08:32:40 up  1:14,  0 user,  load average: 4.91, 3.87, 2.52
	Linux embed-certs-329854 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [73a337a27bc15287c6b44707f01e22b107ab48de6ea41edaf64df22280cb78ef] <==
	I1123 08:32:16.467989       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:32:16.468335       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:32:16.468486       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:32:16.468553       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:32:16.468569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:32:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:32:16.670191       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:32:16.670213       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:32:16.670234       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:32:16.670356       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:32:17.039164       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:32:17.039199       1 metrics.go:72] Registering metrics
	I1123 08:32:17.039266       1 controller.go:711] "Syncing nftables rules"
	I1123 08:32:26.672630       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:32:26.672692       1 main.go:301] handling current node
	I1123 08:32:36.671670       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:32:36.671701       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a6e6c3835f2dc1c07cf6e4a1e658076454c06594c7cb95b824aeb6e1606245f2] <==
	I1123 08:32:07.292639       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:32:07.293166       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:32:07.386731       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:32:07.386809       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:07.652444       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:32:07.657962       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:07.660230       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:32:08.211524       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:32:08.226483       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:32:08.226602       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:32:08.972735       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:32:09.028586       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:32:09.103415       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:32:09.110981       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 08:32:09.113573       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:32:09.119848       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:32:09.219917       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:32:10.059716       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:32:10.070068       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:32:10.080726       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:32:15.123876       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:15.130112       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:15.276538       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:32:15.324997       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 08:32:39.344283       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:57242: use of closed network connection
	
	
	==> kube-controller-manager [b04fdb68dd4713861924d6c07a97a51e8fb3dab2edf6e97f91397f98b0a3dce8] <==
	I1123 08:32:14.220423       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:32:14.220432       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:32:14.220438       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:32:14.220480       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:32:14.220640       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:32:14.221106       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:32:14.222264       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:32:14.224579       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:32:14.224650       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:32:14.224656       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:32:14.224688       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:32:14.224698       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:32:14.224705       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:32:14.224840       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:32:14.225712       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:32:14.225730       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:32:14.225738       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:32:14.227832       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:32:14.227871       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:32:14.232477       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:32:14.232856       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-329854" podCIDRs=["10.244.0.0/24"]
	I1123 08:32:14.241823       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:32:14.252058       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:32:14.256280       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:32:29.171931       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [32bd2712086d092b4f307f86b779639522b1b36a579ec41bb0bd613990037183] <==
	I1123 08:32:15.958690       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:32:16.033996       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:32:16.134639       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:32:16.134690       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 08:32:16.134810       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:32:16.175807       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:32:16.175989       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:32:16.185497       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:32:16.186084       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:32:16.186223       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:32:16.188266       1 config.go:200] "Starting service config controller"
	I1123 08:32:16.188303       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:32:16.188394       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:32:16.188402       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:32:16.188497       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:32:16.188661       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:32:16.189043       1 config.go:309] "Starting node config controller"
	I1123 08:32:16.190595       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:32:16.190664       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:32:16.288531       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:32:16.288601       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:32:16.290051       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
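	
	kube-proxy came up with the iptables Proxier and set route_localnet=1 (both logged above), so each Service materializes as KUBE-* chains in the nat table. The generated rules can be inspected from inside the node with, for example:
	
	  sudo iptables -t nat -L KUBE-SERVICES | head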
	
	
	==> kube-scheduler [5cba92239547529a497a257f2e0d24139548a72bced64f2c7a19a661fd9f8e1a] <==
	I1123 08:32:08.164150       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:32:08.170560       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:32:08.170835       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:32:08.174537       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:32:08.170871       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 08:32:08.176159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:32:08.176555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:32:08.176555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:32:08.176998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:32:08.177019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:32:08.177459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:32:08.178444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:32:08.183292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:32:08.183412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:32:08.183561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:32:08.183671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:32:08.183848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:32:08.183944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:32:08.183991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:32:08.184050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:32:08.184251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:32:08.184332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:32:08.184385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:32:08.184551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1123 08:32:09.275221       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:32:10 embed-certs-329854 kubelet[1460]: E1123 08:32:10.967896    1460 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-329854\" already exists" pod="kube-system/kube-apiserver-embed-certs-329854"
	Nov 23 08:32:10 embed-certs-329854 kubelet[1460]: I1123 08:32:10.992394    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-329854" podStartSLOduration=0.992368595 podStartE2EDuration="992.368595ms" podCreationTimestamp="2025-11-23 08:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.979703595 +0000 UTC m=+1.150863605" watchObservedRunningTime="2025-11-23 08:32:10.992368595 +0000 UTC m=+1.163528603"
	Nov 23 08:32:10 embed-certs-329854 kubelet[1460]: I1123 08:32:10.992688    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-329854" podStartSLOduration=2.992606026 podStartE2EDuration="2.992606026s" podCreationTimestamp="2025-11-23 08:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.992611534 +0000 UTC m=+1.163771537" watchObservedRunningTime="2025-11-23 08:32:10.992606026 +0000 UTC m=+1.163766037"
	Nov 23 08:32:11 embed-certs-329854 kubelet[1460]: I1123 08:32:11.018870    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-329854" podStartSLOduration=1.018848278 podStartE2EDuration="1.018848278s" podCreationTimestamp="2025-11-23 08:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:11.01830295 +0000 UTC m=+1.189462961" watchObservedRunningTime="2025-11-23 08:32:11.018848278 +0000 UTC m=+1.190008289"
	Nov 23 08:32:11 embed-certs-329854 kubelet[1460]: I1123 08:32:11.019003    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-329854" podStartSLOduration=1.018993196 podStartE2EDuration="1.018993196s" podCreationTimestamp="2025-11-23 08:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:11.00381228 +0000 UTC m=+1.174972291" watchObservedRunningTime="2025-11-23 08:32:11.018993196 +0000 UTC m=+1.190153206"
	Nov 23 08:32:14 embed-certs-329854 kubelet[1460]: I1123 08:32:14.303413    1460 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:32:14 embed-certs-329854 kubelet[1460]: I1123 08:32:14.304264    1460 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.349713    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4tsm\" (UniqueName: \"kubernetes.io/projected/8681c9b5-0c36-43e3-8b0e-a1568ed824ad-kube-api-access-x4tsm\") pod \"kindnet-z7lvd\" (UID: \"8681c9b5-0c36-43e3-8b0e-a1568ed824ad\") " pod="kube-system/kindnet-z7lvd"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.350273    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bc937e9-0210-444e-b3e2-354b7d3666e3-xtables-lock\") pod \"kube-proxy-gvb9r\" (UID: \"8bc937e9-0210-444e-b3e2-354b7d3666e3\") " pod="kube-system/kube-proxy-gvb9r"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.350977    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bc937e9-0210-444e-b3e2-354b7d3666e3-lib-modules\") pod \"kube-proxy-gvb9r\" (UID: \"8bc937e9-0210-444e-b3e2-354b7d3666e3\") " pod="kube-system/kube-proxy-gvb9r"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.351036    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6d2c\" (UniqueName: \"kubernetes.io/projected/8bc937e9-0210-444e-b3e2-354b7d3666e3-kube-api-access-s6d2c\") pod \"kube-proxy-gvb9r\" (UID: \"8bc937e9-0210-444e-b3e2-354b7d3666e3\") " pod="kube-system/kube-proxy-gvb9r"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.351067    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8681c9b5-0c36-43e3-8b0e-a1568ed824ad-lib-modules\") pod \"kindnet-z7lvd\" (UID: \"8681c9b5-0c36-43e3-8b0e-a1568ed824ad\") " pod="kube-system/kindnet-z7lvd"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.351095    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8681c9b5-0c36-43e3-8b0e-a1568ed824ad-cni-cfg\") pod \"kindnet-z7lvd\" (UID: \"8681c9b5-0c36-43e3-8b0e-a1568ed824ad\") " pod="kube-system/kindnet-z7lvd"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.351131    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8681c9b5-0c36-43e3-8b0e-a1568ed824ad-xtables-lock\") pod \"kindnet-z7lvd\" (UID: \"8681c9b5-0c36-43e3-8b0e-a1568ed824ad\") " pod="kube-system/kindnet-z7lvd"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.351152    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8bc937e9-0210-444e-b3e2-354b7d3666e3-kube-proxy\") pod \"kube-proxy-gvb9r\" (UID: \"8bc937e9-0210-444e-b3e2-354b7d3666e3\") " pod="kube-system/kube-proxy-gvb9r"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.992092    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gvb9r" podStartSLOduration=0.992070011 podStartE2EDuration="992.070011ms" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:15.99143913 +0000 UTC m=+6.162599123" watchObservedRunningTime="2025-11-23 08:32:15.992070011 +0000 UTC m=+6.163230022"
	Nov 23 08:32:22 embed-certs-329854 kubelet[1460]: I1123 08:32:22.222217    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-z7lvd" podStartSLOduration=7.222193015 podStartE2EDuration="7.222193015s" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:16.989801928 +0000 UTC m=+7.160961939" watchObservedRunningTime="2025-11-23 08:32:22.222193015 +0000 UTC m=+12.393353026"
	Nov 23 08:32:26 embed-certs-329854 kubelet[1460]: I1123 08:32:26.704632    1460 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:32:26 embed-certs-329854 kubelet[1460]: I1123 08:32:26.828361    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/065cacf1-a8ac-42d8-9149-d192463789b6-tmp\") pod \"storage-provisioner\" (UID: \"065cacf1-a8ac-42d8-9149-d192463789b6\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:26 embed-certs-329854 kubelet[1460]: I1123 08:32:26.828435    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cee0eda6-8680-4432-86b2-1664cbe3772e-config-volume\") pod \"coredns-66bc5c9577-dw7dl\" (UID: \"cee0eda6-8680-4432-86b2-1664cbe3772e\") " pod="kube-system/coredns-66bc5c9577-dw7dl"
	Nov 23 08:32:26 embed-certs-329854 kubelet[1460]: I1123 08:32:26.828465    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q75sd\" (UniqueName: \"kubernetes.io/projected/cee0eda6-8680-4432-86b2-1664cbe3772e-kube-api-access-q75sd\") pod \"coredns-66bc5c9577-dw7dl\" (UID: \"cee0eda6-8680-4432-86b2-1664cbe3772e\") " pod="kube-system/coredns-66bc5c9577-dw7dl"
	Nov 23 08:32:26 embed-certs-329854 kubelet[1460]: I1123 08:32:26.828515    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4np6w\" (UniqueName: \"kubernetes.io/projected/065cacf1-a8ac-42d8-9149-d192463789b6-kube-api-access-4np6w\") pod \"storage-provisioner\" (UID: \"065cacf1-a8ac-42d8-9149-d192463789b6\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:28 embed-certs-329854 kubelet[1460]: I1123 08:32:28.017026    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dw7dl" podStartSLOduration=13.01700082 podStartE2EDuration="13.01700082s" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:28.016413906 +0000 UTC m=+18.187573916" watchObservedRunningTime="2025-11-23 08:32:28.01700082 +0000 UTC m=+18.188160831"
	Nov 23 08:32:30 embed-certs-329854 kubelet[1460]: I1123 08:32:30.223577    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.223547918 podStartE2EDuration="15.223547918s" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:28.04821 +0000 UTC m=+18.219370013" watchObservedRunningTime="2025-11-23 08:32:30.223547918 +0000 UTC m=+20.394707929"
	Nov 23 08:32:30 embed-certs-329854 kubelet[1460]: I1123 08:32:30.249147    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcb8t\" (UniqueName: \"kubernetes.io/projected/0629f671-0400-4d43-ab3d-5b435bcf3b1f-kube-api-access-dcb8t\") pod \"busybox\" (UID: \"0629f671-0400-4d43-ab3d-5b435bcf3b1f\") " pod="default/busybox"
	
	
	==> storage-provisioner [8b6c8a33e3d89d92fd4191fb8bb09447e85c82b24a01bbb5960eb52c98e4fd1c] <==
	I1123 08:32:27.265603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:32:27.274046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:32:27.274096       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:32:27.276646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:27.282222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:27.282447       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:32:27.282692       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-329854_240b09a8-83ab-4bc1-9ae0-9c4445b42007!
	I1123 08:32:27.282655       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"37d3940e-092c-49e7-bd2e-687f6f488dd2", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-329854_240b09a8-83ab-4bc1-9ae0-9c4445b42007 became leader
	W1123 08:32:27.285083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:27.290000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:27.383459       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-329854_240b09a8-83ab-4bc1-9ae0-9c4445b42007!
	W1123 08:32:29.293417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:29.300575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:31.303747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:31.308282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:33.311880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:33.316115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:35.319926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:35.324548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:37.328069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:37.334010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:39.337729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:39.343348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
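The storage-provisioner warnings in the dump above repeat every couple of seconds because minikube's hostpath provisioner still takes its leader-election lock on a v1 Endpoints object, which client-go now flags as deprecated in favor of coordination.k8s.io Leases (the lease name k8s.io-minikube-hostpath and the Endpoints LeaderElection event are both visible in the log). A quick manual cross-check of which lock object is actually in use — a hedged sketch, assuming kubectl access to this profile's context:

	kubectl --context embed-certs-329854 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml   # legacy Endpoints lock; the holder is recorded in an annotation
	kubectl --context embed-certs-329854 -n kube-system get leases                                       # a Lease-based lock would show up here instead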
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-329854 -n embed-certs-329854
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-329854 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
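The status probes the framework runs here pull one field at a time through a Go template ({{.APIServer}} above, {{.Host}} in the block below); when reproducing a failure by hand it is usually quicker to dump every field at once. A hedged equivalent, assuming the same binary and profile name:

	out/minikube-linux-amd64 status -p embed-certs-329854 --output json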
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-329854
helpers_test.go:243: (dbg) docker inspect embed-certs-329854:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b",
	        "Created": "2025-11-23T08:31:50.789886741Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:31:50.860364437Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b/hostname",
	        "HostsPath": "/var/lib/docker/containers/83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b/hosts",
	        "LogPath": "/var/lib/docker/containers/83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b/83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b-json.log",
	        "Name": "/embed-certs-329854",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-329854:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-329854",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83f5cb4713ef14b2ae1f3e3b262c978d14b42502561c7950ff3b19f278ae625b",
	                "LowerDir": "/var/lib/docker/overlay2/0388bb39a86a5452cdb69ef0c1797fc05acc0a55cf5eb6b7c0083831127c653a-init/diff:/var/lib/docker/overlay2/f8ae64c4d7d1e12e69b7d69a01d34a96c2f353aeac48a9b438b028f010c32149/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0388bb39a86a5452cdb69ef0c1797fc05acc0a55cf5eb6b7c0083831127c653a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0388bb39a86a5452cdb69ef0c1797fc05acc0a55cf5eb6b7c0083831127c653a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0388bb39a86a5452cdb69ef0c1797fc05acc0a55cf5eb6b7c0083831127c653a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-329854",
	                "Source": "/var/lib/docker/volumes/embed-certs-329854/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-329854",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-329854",
	                "name.minikube.sigs.k8s.io": "embed-certs-329854",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5e07b6d0707c62d10bc9fb3a65701ca9dcf4032f240a3e592d19a99341eb4640",
	            "SandboxKey": "/var/run/docker/netns/5e07b6d0707c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-329854": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4bf2fad4a2d5aea832f3c3335ef371bf783b79d7adfe5f72a9e7e2534707d576",
	                    "EndpointID": "e431aff39ef58672bf08f226660b561b92b250126e405446ee8466b12b7b16c7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "aa:42:27:9f:3e:24",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-329854",
	                        "83f5cb4713ef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
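Note that HostConfig.Ulimits is empty ("Ulimits": []) in the inspect output above, so the node container simply inherits the Docker daemon's default nofile limit rather than carrying an explicit one. Two hedged one-liners to cross-check the effective open-files limit this DeployApp assertion trips over, assuming the docker CLI on the CI host:

	docker inspect -f '{{json .HostConfig.Ulimits}}' embed-certs-329854   # explicit per-container ulimits, if any
	docker exec embed-certs-329854 sh -c 'ulimit -n'                      # the limit processes inside the node actually see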
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-329854 -n embed-certs-329854
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-329854 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-329854 logs -n 25: (1.057198734s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-366757 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo docker system info                                                                                                                                                                                                            │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo containerd config dump                                                                                                                                                                                                        │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo crio config                                                                                                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ delete  │ -p bridge-366757                                                                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-900754                                                                                                                                                                                                                     │ disable-driver-mounts-900754 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-589368 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-644335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ stop    │ -p old-k8s-version-644335 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-644335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p old-k8s-version-644335 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:32:39
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:32:39.044690  334214 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:32:39.044936  334214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:39.044944  334214 out.go:374] Setting ErrFile to fd 2...
	I1123 08:32:39.044948  334214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:39.045161  334214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:32:39.045639  334214 out.go:368] Setting JSON to false
	I1123 08:32:39.046982  334214 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4497,"bootTime":1763882262,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:32:39.047047  334214 start.go:143] virtualization: kvm guest
	I1123 08:32:39.049146  334214 out.go:179] * [old-k8s-version-644335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:32:39.050496  334214 notify.go:221] Checking for updates...
	I1123 08:32:39.050526  334214 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:32:39.052898  334214 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:32:39.054570  334214 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:39.055701  334214 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 08:32:39.056898  334214 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:32:39.058305  334214 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:32:39.059876  334214 config.go:182] Loaded profile config "old-k8s-version-644335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:32:39.061465  334214 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 08:32:39.062491  334214 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:32:39.087411  334214 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:32:39.087536  334214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:39.147576  334214 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:32:39.137613264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:39.147691  334214 docker.go:319] overlay module found
	I1123 08:32:39.149361  334214 out.go:179] * Using the docker driver based on existing profile
	I1123 08:32:39.150582  334214 start.go:309] selected driver: docker
	I1123 08:32:39.150595  334214 start.go:927] validating driver "docker" against &{Name:old-k8s-version-644335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-644335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:39.150676  334214 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:32:39.151208  334214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:39.210357  334214 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:32:39.19964774 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:39.210699  334214 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:32:39.210735  334214 cni.go:84] Creating CNI manager for ""
	I1123 08:32:39.210806  334214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:39.210857  334214 start.go:353] cluster config:
	{Name:old-k8s-version-644335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-644335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:39.212801  334214 out.go:179] * Starting "old-k8s-version-644335" primary control-plane node in "old-k8s-version-644335" cluster
	I1123 08:32:39.213896  334214 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:32:39.214895  334214 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:32:39.216134  334214 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:32:39.216186  334214 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1123 08:32:39.216199  334214 cache.go:65] Caching tarball of preloaded images
	I1123 08:32:39.216287  334214 preload.go:238] Found /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:32:39.216300  334214 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:32:39.216306  334214 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:32:39.216427  334214 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/old-k8s-version-644335/config.json ...
	I1123 08:32:39.239444  334214 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:32:39.239465  334214 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:32:39.239488  334214 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:32:39.239557  334214 start.go:360] acquireMachinesLock for old-k8s-version-644335: {Name:mk2d92388f6ee555f9afab8f780d1d668db94689 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:32:39.239638  334214 start.go:364] duration metric: took 43.187µs to acquireMachinesLock for "old-k8s-version-644335"
	I1123 08:32:39.239663  334214 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:32:39.239673  334214 fix.go:54] fixHost starting: 
	I1123 08:32:39.239964  334214 cli_runner.go:164] Run: docker container inspect old-k8s-version-644335 --format={{.State.Status}}
	I1123 08:32:39.261431  334214 fix.go:112] recreateIfNeeded on old-k8s-version-644335: state=Stopped err=<nil>
	W1123 08:32:39.261471  334214 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	67bbbf97b07c4       56cc512116c8f       9 seconds ago       Running             busybox                   0                   02a7aa69ca459       busybox                                      default
	cde279b802d9a       52546a367cc9e       14 seconds ago      Running             coredns                   0                   bf385555f8c72       coredns-66bc5c9577-dw7dl                     kube-system
	8b6c8a33e3d89       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   6c955aa12c65e       storage-provisioner                          kube-system
	73a337a27bc15       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   64aedf5a5aa9b       kindnet-z7lvd                                kube-system
	32bd2712086d0       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   c9f3f047d6f2c       kube-proxy-gvb9r                             kube-system
	b04fdb68dd471       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   d3adbb493b759       kube-controller-manager-embed-certs-329854   kube-system
	5cba922395475       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   515e76b7ebcb8       kube-scheduler-embed-certs-329854            kube-system
	a6e6c3835f2dc       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   daed34440af15       kube-apiserver-embed-certs-329854            kube-system
	15b6db557f461       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   0e6372038704c       etcd-embed-certs-329854                      kube-system
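
Note: the table above is CRI-level container state captured from the node. A comparable listing can usually be reproduced with crictl against the containerd socket (assuming crictl is available in the node image):

    minikube -p embed-certs-329854 ssh -- sudo crictl ps -a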
	
	
	==> containerd <==
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.199569424Z" level=info msg="CreateContainer within sandbox \"6c955aa12c65e1f5f0185587d1c86a6c102ab6f0e7cd631e9bcec671339ddc21\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"8b6c8a33e3d89d92fd4191fb8bb09447e85c82b24a01bbb5960eb52c98e4fd1c\""
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.200064724Z" level=info msg="StartContainer for \"8b6c8a33e3d89d92fd4191fb8bb09447e85c82b24a01bbb5960eb52c98e4fd1c\""
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.200905499Z" level=info msg="connecting to shim 8b6c8a33e3d89d92fd4191fb8bb09447e85c82b24a01bbb5960eb52c98e4fd1c" address="unix:///run/containerd/s/f0ce09a4207fd1c8f40d825a29514a9faa04fd72abeccbcac95907227117c565" protocol=ttrpc version=3
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.204497717Z" level=info msg="Container cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.210620810Z" level=info msg="CreateContainer within sandbox \"bf385555f8c72e4ccca786d5af5bc16598cbd29824b34d74d2dbf4f554d4f340\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1\""
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.211616106Z" level=info msg="StartContainer for \"cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1\""
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.212630767Z" level=info msg="connecting to shim cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1" address="unix:///run/containerd/s/72991981ea7c5e1f1d97b1f59ed34f5e9b872151ceafd6e6c74536fae3edb12b" protocol=ttrpc version=3
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.252335007Z" level=info msg="StartContainer for \"8b6c8a33e3d89d92fd4191fb8bb09447e85c82b24a01bbb5960eb52c98e4fd1c\" returns successfully"
	Nov 23 08:32:27 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:27.259973006Z" level=info msg="StartContainer for \"cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1\" returns successfully"
	Nov 23 08:32:30 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:30.538719841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0629f671-0400-4d43-ab3d-5b435bcf3b1f,Namespace:default,Attempt:0,}"
	Nov 23 08:32:30 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:30.576384999Z" level=info msg="connecting to shim 02a7aa69ca4598ca8927a3c244bd3322aae1774291a0251c27f64a09f855b42b" address="unix:///run/containerd/s/0bcccb60a69657dc9cf126264aea667b6116d665b0cda09941bed4ca328ad957" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:32:30 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:30.657368713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0629f671-0400-4d43-ab3d-5b435bcf3b1f,Namespace:default,Attempt:0,} returns sandbox id \"02a7aa69ca4598ca8927a3c244bd3322aae1774291a0251c27f64a09f855b42b\""
	Nov 23 08:32:30 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:30.661009637Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.942693428Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.943408934Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.944909944Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.946956011Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.947200922Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.285942018s"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.947237952Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.951545765Z" level=info msg="CreateContainer within sandbox \"02a7aa69ca4598ca8927a3c244bd3322aae1774291a0251c27f64a09f855b42b\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.959060274Z" level=info msg="Container 67bbbf97b07c4ce60dd1fff7948e3be427bb55e44d263efdadc3c7caf9a86cfb: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.965253297Z" level=info msg="CreateContainer within sandbox \"02a7aa69ca4598ca8927a3c244bd3322aae1774291a0251c27f64a09f855b42b\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"67bbbf97b07c4ce60dd1fff7948e3be427bb55e44d263efdadc3c7caf9a86cfb\""
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.965883922Z" level=info msg="StartContainer for \"67bbbf97b07c4ce60dd1fff7948e3be427bb55e44d263efdadc3c7caf9a86cfb\""
	Nov 23 08:32:32 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:32.966685456Z" level=info msg="connecting to shim 67bbbf97b07c4ce60dd1fff7948e3be427bb55e44d263efdadc3c7caf9a86cfb" address="unix:///run/containerd/s/0bcccb60a69657dc9cf126264aea667b6116d665b0cda09941bed4ca328ad957" protocol=ttrpc version=3
	Nov 23 08:32:33 embed-certs-329854 containerd[667]: time="2025-11-23T08:32:33.024237318Z" level=info msg="StartContainer for \"67bbbf97b07c4ce60dd1fff7948e3be427bb55e44d263efdadc3c7caf9a86cfb\" returns successfully"
	
	
	==> coredns [cde279b802d9a0e22229573190fe4cbcb32864bed2b1837af558faf1a9c2bff1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36048 - 58637 "HINFO IN 7336470861756283583.6089140466787590237. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064671584s
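
Note: the single NXDOMAIN above is CoreDNS's own HINFO self-check at startup, not a failed workload lookup. In-cluster resolution can be spot-checked from the busybox pod; a sketch, assuming the kubectl context matches the profile name:

    # context name is an assumption; substitute your own
    kubectl --context embed-certs-329854 exec busybox -- nslookup kubernetes.default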
	
	
	==> describe nodes <==
	Name:               embed-certs-329854
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-329854
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=embed-certs-329854
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_32_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:32:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-329854
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:32:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-329854
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                1b12f56c-96ed-41b8-b155-bc9fc80bca67
	  Boot ID:                    5380b858-5e3f-4ab2-b78d-8704cd56a682
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-dw7dl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-embed-certs-329854                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-z7lvd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-embed-certs-329854             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-embed-certs-329854    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-gvb9r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-embed-certs-329854             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node embed-certs-329854 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node embed-certs-329854 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node embed-certs-329854 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node embed-certs-329854 event: Registered Node embed-certs-329854 in Controller
	  Normal  NodeReady                16s   kubelet          Node embed-certs-329854 status is now: NodeReady
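
Note: the node description above (labels, conditions, capacity, allocated resources, events) is plain kubectl output and can be re-fetched at any time:

    kubectl --context embed-certs-329854 describe node embed-certs-329854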
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 7d 09 6f 5f 2b 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 d4 5e e6 42 49 08 06
	[ +11.373766] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 a4 f8 6b 15 37 08 06
	[  +0.013916] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[ +40.470104] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	[  +0.167388] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 d3 04 3f 4c f4 08 06
	[  +2.400864] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 01 20 fe a4 35 08 06
	[  +0.000386] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[  +5.210763] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[Nov23 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a c0 03 9d 77 98 08 06
	[  +0.000409] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[ +19.602508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 9b 99 36 e6 f4 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
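
Note: the repeated "martian source" lines are the kernel flagging packets that arrive on eth0 with pod-network (10.244.0.x) source addresses, a common artifact when a CNI bridge and Docker networking share a host; on their own they do not indicate a failure. Whether martian logging is enabled can be checked on the node:

    minikube -p embed-certs-329854 ssh -- sysctl net.ipv4.conf.all.log_martians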
	
	
	==> etcd [15b6db557f4615f6a9f5315841cc0e76f69a6b2df661553ed8cc53a6ca5854df] <==
	{"level":"info","ts":"2025-11-23T08:32:07.817940Z","caller":"traceutil/trace.go:172","msg":"trace[759510837] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"155.641936ms","start":"2025-11-23T08:32:07.662293Z","end":"2025-11-23T08:32:07.817935Z","steps":["trace[759510837] 'process raft request'  (duration: 155.381549ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.817963Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.192501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:07.817973Z","caller":"traceutil/trace.go:172","msg":"trace[469815305] transaction","detail":"{read_only:false; response_revision:23; number_of_response:1; }","duration":"160.782473ms","start":"2025-11-23T08:32:07.657184Z","end":"2025-11-23T08:32:07.817967Z","steps":["trace[469815305] 'process raft request'  (duration: 160.287671ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.818009Z","caller":"traceutil/trace.go:172","msg":"trace[1646389769] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:30; }","duration":"150.22602ms","start":"2025-11-23T08:32:07.667759Z","end":"2025-11-23T08:32:07.817985Z","steps":["trace[1646389769] 'agreement among raft nodes before linearized reading'  (duration: 150.167611ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.818046Z","caller":"traceutil/trace.go:172","msg":"trace[1370300925] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"156.862658ms","start":"2025-11-23T08:32:07.661175Z","end":"2025-11-23T08:32:07.818037Z","steps":["trace[1370300925] 'process raft request'  (duration: 156.398808ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.818212Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.780647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:07.818240Z","caller":"traceutil/trace.go:172","msg":"trace[1592496773] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:30; }","duration":"156.815602ms","start":"2025-11-23T08:32:07.661417Z","end":"2025-11-23T08:32:07.818233Z","steps":["trace[1592496773] 'agreement among raft nodes before linearized reading'  (duration: 156.747165ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.818635Z","caller":"traceutil/trace.go:172","msg":"trace[143842581] transaction","detail":"{read_only:false; response_revision:26; number_of_response:1; }","duration":"157.044231ms","start":"2025-11-23T08:32:07.661581Z","end":"2025-11-23T08:32:07.818625Z","steps":["trace[143842581] 'process raft request'  (duration: 156.013571ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.139702Z","caller":"traceutil/trace.go:172","msg":"trace[826091995] transaction","detail":"{read_only:false; response_revision:40; number_of_response:1; }","duration":"230.337793ms","start":"2025-11-23T08:32:07.909346Z","end":"2025-11-23T08:32:08.139684Z","steps":["trace[826091995] 'process raft request'  (duration: 230.305114ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.139919Z","caller":"traceutil/trace.go:172","msg":"trace[509800730] transaction","detail":"{read_only:false; response_revision:32; number_of_response:1; }","duration":"311.142479ms","start":"2025-11-23T08:32:07.828759Z","end":"2025-11-23T08:32:08.139902Z","steps":["trace[509800730] 'process raft request'  (duration: 255.416716ms)","trace[509800730] 'compare'  (duration: 55.06443ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:08.140029Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.828739Z","time spent":"311.240846ms","remote":"127.0.0.1:59358","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":711,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/csinodes/embed-certs-329854\" mod_revision:0 > success:<request_put:<key:\"/registry/csinodes/embed-certs-329854\" value_size:666 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:32:08.140157Z","caller":"traceutil/trace.go:172","msg":"trace[616308197] transaction","detail":"{read_only:false; response_revision:36; number_of_response:1; }","duration":"310.596811ms","start":"2025-11-23T08:32:07.829542Z","end":"2025-11-23T08:32:08.140139Z","steps":["trace[616308197] 'process raft request'  (duration: 309.978047ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.140206Z","caller":"traceutil/trace.go:172","msg":"trace[138202270] transaction","detail":"{read_only:false; response_revision:34; number_of_response:1; }","duration":"310.931177ms","start":"2025-11-23T08:32:07.829256Z","end":"2025-11-23T08:32:08.140188Z","steps":["trace[138202270] 'process raft request'  (duration: 310.183535ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.140250Z","caller":"traceutil/trace.go:172","msg":"trace[1317935024] transaction","detail":"{read_only:false; response_revision:37; number_of_response:1; }","duration":"310.532327ms","start":"2025-11-23T08:32:07.829708Z","end":"2025-11-23T08:32:08.140240Z","steps":["trace[1317935024] 'process raft request'  (duration: 309.85702ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.140277Z","caller":"traceutil/trace.go:172","msg":"trace[468149616] transaction","detail":"{read_only:false; response_revision:35; number_of_response:1; }","duration":"310.964482ms","start":"2025-11-23T08:32:07.829306Z","end":"2025-11-23T08:32:08.140271Z","steps":["trace[468149616] 'process raft request'  (duration: 310.154304ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:08.140302Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829698Z","time spent":"310.576449ms","remote":"127.0.0.1:58496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":350,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/namespaces/kube-node-lease\" mod_revision:0 > success:<request_put:<key:\"/registry/namespaces/kube-node-lease\" value_size:306 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T08:32:08.140307Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829300Z","time spent":"310.994522ms","remote":"127.0.0.1:59644","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":959,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.scheduling.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.scheduling.k8s.io\" value_size:886 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:32:08.140426Z","caller":"traceutil/trace.go:172","msg":"trace[1279238835] transaction","detail":"{read_only:false; response_revision:33; number_of_response:1; }","duration":"311.231329ms","start":"2025-11-23T08:32:07.829187Z","end":"2025-11-23T08:32:08.140418Z","steps":["trace[1279238835] 'process raft request'  (duration: 310.21164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:08.140277Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829248Z","time spent":"310.995336ms","remote":"127.0.0.1:59644","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":941,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.node.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.node.k8s.io\" value_size:874 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T08:32:08.140463Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829169Z","time spent":"311.27667ms","remote":"127.0.0.1:59644","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":926,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.policy\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.policy\" value_size:864 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:32:08.140528Z","caller":"traceutil/trace.go:172","msg":"trace[994918012] transaction","detail":"{read_only:false; response_revision:38; number_of_response:1; }","duration":"310.742346ms","start":"2025-11-23T08:32:07.829776Z","end":"2025-11-23T08:32:08.140518Z","steps":["trace[994918012] 'process raft request'  (duration: 309.813739ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:08.140581Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829761Z","time spent":"310.790532ms","remote":"127.0.0.1:59436","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":703,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/prioritylevelconfigurations/workload-high\" mod_revision:0 > success:<request_put:<key:\"/registry/prioritylevelconfigurations/workload-high\" value_size:644 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:32:08.140654Z","caller":"traceutil/trace.go:172","msg":"trace[1142020195] transaction","detail":"{read_only:false; response_revision:39; number_of_response:1; }","duration":"310.631253ms","start":"2025-11-23T08:32:07.830014Z","end":"2025-11-23T08:32:08.140645Z","steps":["trace[1142020195] 'process raft request'  (duration: 309.607869ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:08.140705Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.830003Z","time spent":"310.671593ms","remote":"127.0.0.1:59644","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":953,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.resource.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.resource.k8s.io\" value_size:882 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T08:32:08.140252Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.829521Z","time spent":"310.681736ms","remote":"127.0.0.1:58464","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3008,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" mod_revision:0 > success:<request_put:<key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" value_size:2933 >> failure:<>"}
	
	
	==> kernel <==
	 08:32:42 up  1:14,  0 user,  load average: 4.91, 3.87, 2.52
	Linux embed-certs-329854 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [73a337a27bc15287c6b44707f01e22b107ab48de6ea41edaf64df22280cb78ef] <==
	I1123 08:32:16.467989       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:32:16.468335       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:32:16.468486       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:32:16.468553       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:32:16.468569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:32:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:32:16.670191       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:32:16.670213       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:32:16.670234       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:32:16.670356       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:32:17.039164       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:32:17.039199       1 metrics.go:72] Registering metrics
	I1123 08:32:17.039266       1 controller.go:711] "Syncing nftables rules"
	I1123 08:32:26.672630       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:32:26.672692       1 main.go:301] handling current node
	I1123 08:32:36.671670       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:32:36.671701       1 main.go:301] handling current node
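
Note: the "nri plugin exited" line is kindnet failing to reach an NRI socket this node does not provide; the log shows it proceeds anyway, syncs its caches, and handles the (single) node on a roughly 10s cycle. The nftables rules it reports syncing can be inspected directly:

    minikube -p embed-certs-329854 ssh -- sudo nft list ruleset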
	
	
	==> kube-apiserver [a6e6c3835f2dc1c07cf6e4a1e658076454c06594c7cb95b824aeb6e1606245f2] <==
	I1123 08:32:07.292639       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:32:07.293166       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:32:07.386731       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:32:07.386809       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:07.652444       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:32:07.657962       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:07.660230       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:32:08.211524       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:32:08.226483       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:32:08.226602       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:32:08.972735       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:32:09.028586       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:32:09.103415       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:32:09.110981       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 08:32:09.113573       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:32:09.119848       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:32:09.219917       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:32:10.059716       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:32:10.070068       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:32:10.080726       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:32:15.123876       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:15.130112       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:15.276538       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:32:15.324997       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 08:32:39.344283       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:57242: use of closed network connection
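
Note: the lone "use of closed network connection" error at 08:32:39 is the apiserver logging a client (192.168.76.1, likely the host side of the Docker network) that hung up mid-read; it does not affect serving. A quick liveness probe against the same endpoint:

    kubectl --context embed-certs-329854 get --raw /healthz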
	
	
	==> kube-controller-manager [b04fdb68dd4713861924d6c07a97a51e8fb3dab2edf6e97f91397f98b0a3dce8] <==
	I1123 08:32:14.220423       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:32:14.220432       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:32:14.220438       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:32:14.220480       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:32:14.220640       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:32:14.221106       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:32:14.222264       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:32:14.224579       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:32:14.224650       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:32:14.224656       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:32:14.224688       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:32:14.224698       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:32:14.224705       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:32:14.224840       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:32:14.225712       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:32:14.225730       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:32:14.225738       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:32:14.227832       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:32:14.227871       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:32:14.232477       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:32:14.232856       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-329854" podCIDRs=["10.244.0.0/24"]
	I1123 08:32:14.241823       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:32:14.252058       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:32:14.256280       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:32:29.171931       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [32bd2712086d092b4f307f86b779639522b1b36a579ec41bb0bd613990037183] <==
	I1123 08:32:15.958690       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:32:16.033996       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:32:16.134639       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:32:16.134690       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 08:32:16.134810       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:32:16.175807       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:32:16.175989       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:32:16.185497       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:32:16.186084       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:32:16.186223       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:32:16.188266       1 config.go:200] "Starting service config controller"
	I1123 08:32:16.188303       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:32:16.188394       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:32:16.188402       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:32:16.188497       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:32:16.188661       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:32:16.189043       1 config.go:309] "Starting node config controller"
	I1123 08:32:16.190595       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:32:16.190664       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:32:16.288531       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:32:16.288601       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:32:16.290051       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
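
Note: the "configuration may be incomplete" warning only flags that nodePortAddresses is unset, so NodePorts bind on every local IP; the log itself suggests the remedy (`--nodeport-addresses primary`, supported in recent Kubernetes releases). In a kubeadm-provisioned cluster such as this one, the live KubeProxyConfiguration where that field would go can be inspected with:

    kubectl --context embed-certs-329854 -n kube-system get configmap kube-proxy -o yaml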
	
	
	==> kube-scheduler [5cba92239547529a497a257f2e0d24139548a72bced64f2c7a19a661fd9f8e1a] <==
	I1123 08:32:08.164150       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:32:08.170560       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:32:08.170835       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:32:08.174537       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:32:08.170871       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 08:32:08.176159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:32:08.176555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:32:08.176555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:32:08.176998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:32:08.177019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:32:08.177459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:32:08.178444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:32:08.183292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:32:08.183412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:32:08.183561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:32:08.183671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:32:08.183848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:32:08.183944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:32:08.183991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:32:08.184050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:32:08.184251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:32:08.184332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:32:08.184385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:32:08.184551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1123 08:32:09.275221       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
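
Note: the burst of "Failed to watch ... is forbidden" errors at 08:32:08 is the usual startup race: the scheduler starts its informers before the apiserver finishes bootstrapping RBAC (the apiserver log above shows the roles/rolebindings evaluators appearing at 08:32:08-09), and the final cache-sync line confirms recovery a second later. Were such errors to persist, the permission could be probed directly:

    kubectl --context embed-certs-329854 auth can-i list pods --as=system:kube-scheduler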
	
	
	==> kubelet <==
	Nov 23 08:32:10 embed-certs-329854 kubelet[1460]: E1123 08:32:10.967896    1460 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-329854\" already exists" pod="kube-system/kube-apiserver-embed-certs-329854"
	Nov 23 08:32:10 embed-certs-329854 kubelet[1460]: I1123 08:32:10.992394    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-329854" podStartSLOduration=0.992368595 podStartE2EDuration="992.368595ms" podCreationTimestamp="2025-11-23 08:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.979703595 +0000 UTC m=+1.150863605" watchObservedRunningTime="2025-11-23 08:32:10.992368595 +0000 UTC m=+1.163528603"
	Nov 23 08:32:10 embed-certs-329854 kubelet[1460]: I1123 08:32:10.992688    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-329854" podStartSLOduration=2.992606026 podStartE2EDuration="2.992606026s" podCreationTimestamp="2025-11-23 08:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.992611534 +0000 UTC m=+1.163771537" watchObservedRunningTime="2025-11-23 08:32:10.992606026 +0000 UTC m=+1.163766037"
	Nov 23 08:32:11 embed-certs-329854 kubelet[1460]: I1123 08:32:11.018870    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-329854" podStartSLOduration=1.018848278 podStartE2EDuration="1.018848278s" podCreationTimestamp="2025-11-23 08:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:11.01830295 +0000 UTC m=+1.189462961" watchObservedRunningTime="2025-11-23 08:32:11.018848278 +0000 UTC m=+1.190008289"
	Nov 23 08:32:11 embed-certs-329854 kubelet[1460]: I1123 08:32:11.019003    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-329854" podStartSLOduration=1.018993196 podStartE2EDuration="1.018993196s" podCreationTimestamp="2025-11-23 08:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:11.00381228 +0000 UTC m=+1.174972291" watchObservedRunningTime="2025-11-23 08:32:11.018993196 +0000 UTC m=+1.190153206"
	Nov 23 08:32:14 embed-certs-329854 kubelet[1460]: I1123 08:32:14.303413    1460 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:32:14 embed-certs-329854 kubelet[1460]: I1123 08:32:14.304264    1460 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.349713    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4tsm\" (UniqueName: \"kubernetes.io/projected/8681c9b5-0c36-43e3-8b0e-a1568ed824ad-kube-api-access-x4tsm\") pod \"kindnet-z7lvd\" (UID: \"8681c9b5-0c36-43e3-8b0e-a1568ed824ad\") " pod="kube-system/kindnet-z7lvd"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.350273    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bc937e9-0210-444e-b3e2-354b7d3666e3-xtables-lock\") pod \"kube-proxy-gvb9r\" (UID: \"8bc937e9-0210-444e-b3e2-354b7d3666e3\") " pod="kube-system/kube-proxy-gvb9r"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.350977    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bc937e9-0210-444e-b3e2-354b7d3666e3-lib-modules\") pod \"kube-proxy-gvb9r\" (UID: \"8bc937e9-0210-444e-b3e2-354b7d3666e3\") " pod="kube-system/kube-proxy-gvb9r"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.351036    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6d2c\" (UniqueName: \"kubernetes.io/projected/8bc937e9-0210-444e-b3e2-354b7d3666e3-kube-api-access-s6d2c\") pod \"kube-proxy-gvb9r\" (UID: \"8bc937e9-0210-444e-b3e2-354b7d3666e3\") " pod="kube-system/kube-proxy-gvb9r"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.351067    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8681c9b5-0c36-43e3-8b0e-a1568ed824ad-lib-modules\") pod \"kindnet-z7lvd\" (UID: \"8681c9b5-0c36-43e3-8b0e-a1568ed824ad\") " pod="kube-system/kindnet-z7lvd"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.351095    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8681c9b5-0c36-43e3-8b0e-a1568ed824ad-cni-cfg\") pod \"kindnet-z7lvd\" (UID: \"8681c9b5-0c36-43e3-8b0e-a1568ed824ad\") " pod="kube-system/kindnet-z7lvd"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.351131    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8681c9b5-0c36-43e3-8b0e-a1568ed824ad-xtables-lock\") pod \"kindnet-z7lvd\" (UID: \"8681c9b5-0c36-43e3-8b0e-a1568ed824ad\") " pod="kube-system/kindnet-z7lvd"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.351152    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8bc937e9-0210-444e-b3e2-354b7d3666e3-kube-proxy\") pod \"kube-proxy-gvb9r\" (UID: \"8bc937e9-0210-444e-b3e2-354b7d3666e3\") " pod="kube-system/kube-proxy-gvb9r"
	Nov 23 08:32:15 embed-certs-329854 kubelet[1460]: I1123 08:32:15.992092    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gvb9r" podStartSLOduration=0.992070011 podStartE2EDuration="992.070011ms" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:15.99143913 +0000 UTC m=+6.162599123" watchObservedRunningTime="2025-11-23 08:32:15.992070011 +0000 UTC m=+6.163230022"
	Nov 23 08:32:22 embed-certs-329854 kubelet[1460]: I1123 08:32:22.222217    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-z7lvd" podStartSLOduration=7.222193015 podStartE2EDuration="7.222193015s" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:16.989801928 +0000 UTC m=+7.160961939" watchObservedRunningTime="2025-11-23 08:32:22.222193015 +0000 UTC m=+12.393353026"
	Nov 23 08:32:26 embed-certs-329854 kubelet[1460]: I1123 08:32:26.704632    1460 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:32:26 embed-certs-329854 kubelet[1460]: I1123 08:32:26.828361    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/065cacf1-a8ac-42d8-9149-d192463789b6-tmp\") pod \"storage-provisioner\" (UID: \"065cacf1-a8ac-42d8-9149-d192463789b6\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:26 embed-certs-329854 kubelet[1460]: I1123 08:32:26.828435    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cee0eda6-8680-4432-86b2-1664cbe3772e-config-volume\") pod \"coredns-66bc5c9577-dw7dl\" (UID: \"cee0eda6-8680-4432-86b2-1664cbe3772e\") " pod="kube-system/coredns-66bc5c9577-dw7dl"
	Nov 23 08:32:26 embed-certs-329854 kubelet[1460]: I1123 08:32:26.828465    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q75sd\" (UniqueName: \"kubernetes.io/projected/cee0eda6-8680-4432-86b2-1664cbe3772e-kube-api-access-q75sd\") pod \"coredns-66bc5c9577-dw7dl\" (UID: \"cee0eda6-8680-4432-86b2-1664cbe3772e\") " pod="kube-system/coredns-66bc5c9577-dw7dl"
	Nov 23 08:32:26 embed-certs-329854 kubelet[1460]: I1123 08:32:26.828515    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4np6w\" (UniqueName: \"kubernetes.io/projected/065cacf1-a8ac-42d8-9149-d192463789b6-kube-api-access-4np6w\") pod \"storage-provisioner\" (UID: \"065cacf1-a8ac-42d8-9149-d192463789b6\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:28 embed-certs-329854 kubelet[1460]: I1123 08:32:28.017026    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dw7dl" podStartSLOduration=13.01700082 podStartE2EDuration="13.01700082s" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:28.016413906 +0000 UTC m=+18.187573916" watchObservedRunningTime="2025-11-23 08:32:28.01700082 +0000 UTC m=+18.188160831"
	Nov 23 08:32:30 embed-certs-329854 kubelet[1460]: I1123 08:32:30.223577    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.223547918 podStartE2EDuration="15.223547918s" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:28.04821 +0000 UTC m=+18.219370013" watchObservedRunningTime="2025-11-23 08:32:30.223547918 +0000 UTC m=+20.394707929"
	Nov 23 08:32:30 embed-certs-329854 kubelet[1460]: I1123 08:32:30.249147    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcb8t\" (UniqueName: \"kubernetes.io/projected/0629f671-0400-4d43-ab3d-5b435bcf3b1f-kube-api-access-dcb8t\") pod \"busybox\" (UID: \"0629f671-0400-4d43-ab3d-5b435bcf3b1f\") " pod="default/busybox"
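
Note: the pod_startup_latency_tracker lines record per-pod startup SLO durations; busybox itself only appears at 08:32:30, when its API-access volume is attached. Its recorded start time can be re-read from the API:

    kubectl --context embed-certs-329854 get pod busybox -o jsonpath='{.status.startTime}'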
	
	
	==> storage-provisioner [8b6c8a33e3d89d92fd4191fb8bb09447e85c82b24a01bbb5960eb52c98e4fd1c] <==
	I1123 08:32:27.265603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:32:27.274046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:32:27.274096       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:32:27.276646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:27.282222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:27.282447       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:32:27.282692       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-329854_240b09a8-83ab-4bc1-9ae0-9c4445b42007!
	I1123 08:32:27.282655       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"37d3940e-092c-49e7-bd2e-687f6f488dd2", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-329854_240b09a8-83ab-4bc1-9ae0-9c4445b42007 became leader
	W1123 08:32:27.285083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:27.290000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:27.383459       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-329854_240b09a8-83ab-4bc1-9ae0-9c4445b42007!
	W1123 08:32:29.293417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:29.300575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:31.303747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:31.308282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:33.311880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:33.316115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:35.319926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:35.324548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:37.328069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:37.334010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:39.337729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:39.343348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:41.346194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:41.350356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-329854 -n embed-certs-329854
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-329854 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (12.98s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (13.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-073500 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6da94757-8ca2-4dd7-a188-8675c49bc42b] Pending
helpers_test.go:352: "busybox" [6da94757-8ca2-4dd7-a188-8675c49bc42b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6da94757-8ca2-4dd7-a188-8675c49bc42b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004785608s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-073500 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
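The failing check above can be re-run by hand against the same profile. A minimal sketch, assuming the no-preload-073500 cluster from this run is still up and that testdata/busybox.yaml is the manifest deployed above; the `kubectl wait` step is an added convenience, not part of the test:

	# deploy the same busybox pod the test creates
	kubectl --context no-preload-073500 create -f testdata/busybox.yaml
	# wait until the pod is Ready (the test polls for up to 8m0s)
	kubectl --context no-preload-073500 wait --for=condition=Ready pod/busybox --timeout=480s
	# the test asserts this prints 1048576; this run returned 1024
	kubectl --context no-preload-073500 exec busybox -- /bin/sh -c "ulimit -n"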
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-073500
helpers_test.go:243: (dbg) docker inspect no-preload-073500:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b",
	        "Created": "2025-11-23T08:31:37.952821586Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315635,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:31:37.996546751Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b/hostname",
	        "HostsPath": "/var/lib/docker/containers/a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b/hosts",
	        "LogPath": "/var/lib/docker/containers/a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b/a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b-json.log",
	        "Name": "/no-preload-073500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-073500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-073500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b",
	                "LowerDir": "/var/lib/docker/overlay2/b19a2adb896b828848dbad36678fd3ca8e0afccf189c689b1a998394732f9972-init/diff:/var/lib/docker/overlay2/f8ae64c4d7d1e12e69b7d69a01d34a96c2f353aeac48a9b438b028f010c32149/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b19a2adb896b828848dbad36678fd3ca8e0afccf189c689b1a998394732f9972/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b19a2adb896b828848dbad36678fd3ca8e0afccf189c689b1a998394732f9972/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b19a2adb896b828848dbad36678fd3ca8e0afccf189c689b1a998394732f9972/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-073500",
	                "Source": "/var/lib/docker/volumes/no-preload-073500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-073500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-073500",
	                "name.minikube.sigs.k8s.io": "no-preload-073500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e545cc0c139262a098dd6e2c2dc420cca42c8aa6900efc08797fe460e7a9b3c6",
	            "SandboxKey": "/var/run/docker/netns/e545cc0c1392",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-073500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6f0be6481d26f2f2cf931fb63b4eac81badf2507cbf1ef00db671fae95e6d0a",
	                    "EndpointID": "e458ec7b30cb719879fa07beb212dca4de83e9977fac67058b6b208a81f08945",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ee:99:d1:8a:7b:a1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-073500",
	                        "a2f4b0aed911"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-073500 -n no-preload-073500
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-073500 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-073500 logs -n 25: (1.201305353s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-366757 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo docker system info                                                                                                                                                                                                            │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo containerd config dump                                                                                                                                                                                                        │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo crio config                                                                                                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ delete  │ -p bridge-366757                                                                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-900754                                                                                                                                                                                                                     │ disable-driver-mounts-900754 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-589368 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-644335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ stop    │ -p old-k8s-version-644335 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-644335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p old-k8s-version-644335 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:32:39
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:32:39.044690  334214 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:32:39.044936  334214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:39.044944  334214 out.go:374] Setting ErrFile to fd 2...
	I1123 08:32:39.044948  334214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:39.045161  334214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:32:39.045639  334214 out.go:368] Setting JSON to false
	I1123 08:32:39.046982  334214 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4497,"bootTime":1763882262,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:32:39.047047  334214 start.go:143] virtualization: kvm guest
	I1123 08:32:39.049146  334214 out.go:179] * [old-k8s-version-644335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:32:39.050496  334214 notify.go:221] Checking for updates...
	I1123 08:32:39.050526  334214 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:32:39.052898  334214 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:32:39.054570  334214 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:39.055701  334214 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 08:32:39.056898  334214 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:32:39.058305  334214 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:32:39.059876  334214 config.go:182] Loaded profile config "old-k8s-version-644335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:32:39.061465  334214 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 08:32:39.062491  334214 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:32:39.087411  334214 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:32:39.087536  334214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:39.147576  334214 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:32:39.137613264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:39.147691  334214 docker.go:319] overlay module found
	I1123 08:32:39.149361  334214 out.go:179] * Using the docker driver based on existing profile
	I1123 08:32:39.150582  334214 start.go:309] selected driver: docker
	I1123 08:32:39.150595  334214 start.go:927] validating driver "docker" against &{Name:old-k8s-version-644335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-644335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:39.150676  334214 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:32:39.151208  334214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:39.210357  334214 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:32:39.19964774 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:39.210699  334214 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:32:39.210735  334214 cni.go:84] Creating CNI manager for ""
	I1123 08:32:39.210806  334214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:39.210857  334214 start.go:353] cluster config:
	{Name:old-k8s-version-644335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-644335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:39.212801  334214 out.go:179] * Starting "old-k8s-version-644335" primary control-plane node in "old-k8s-version-644335" cluster
	I1123 08:32:39.213896  334214 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:32:39.214895  334214 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:32:39.216134  334214 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:32:39.216186  334214 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1123 08:32:39.216199  334214 cache.go:65] Caching tarball of preloaded images
	I1123 08:32:39.216287  334214 preload.go:238] Found /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:32:39.216300  334214 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:32:39.216306  334214 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:32:39.216427  334214 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/old-k8s-version-644335/config.json ...
	I1123 08:32:39.239444  334214 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:32:39.239465  334214 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:32:39.239488  334214 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:32:39.239557  334214 start.go:360] acquireMachinesLock for old-k8s-version-644335: {Name:mk2d92388f6ee555f9afab8f780d1d668db94689 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:32:39.239638  334214 start.go:364] duration metric: took 43.187µs to acquireMachinesLock for "old-k8s-version-644335"
	I1123 08:32:39.239663  334214 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:32:39.239673  334214 fix.go:54] fixHost starting: 
	I1123 08:32:39.239964  334214 cli_runner.go:164] Run: docker container inspect old-k8s-version-644335 --format={{.State.Status}}
	I1123 08:32:39.261431  334214 fix.go:112] recreateIfNeeded on old-k8s-version-644335: state=Stopped err=<nil>
	W1123 08:32:39.261471  334214 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 08:32:38.339696  326134 node_ready.go:57] node "default-k8s-diff-port-589368" has "Ready":"False" status (will retry)
	W1123 08:32:40.339931  326134 node_ready.go:57] node "default-k8s-diff-port-589368" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	59b49df94c588       56cc512116c8f       7 seconds ago       Running             busybox                   0                   1656cfa16fb3a       busybox                                     default
	d9d52668bdece       52546a367cc9e       12 seconds ago      Running             coredns                   0                   69f54518e3c1b       coredns-66bc5c9577-tckhn                    kube-system
	54a1be0953581       6e38f40d628db       12 seconds ago      Running             storage-provisioner       0                   fe525bd135965       storage-provisioner                         kube-system
	ca3eb96c3d252       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   9029b858fb87b       kindnet-5mdzd                               kube-system
	b77ff0908bf0c       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   bab040b52c0a6       kube-proxy-v2r6z                            kube-system
	e1519f5508be2       7dd6aaa1717ab       38 seconds ago      Running             kube-scheduler            0                   a7c6095e8192b       kube-scheduler-no-preload-073500            kube-system
	5f32b57cfa865       c80c8dbafe7dd       38 seconds ago      Running             kube-controller-manager   0                   eaf0276d6fa0a       kube-controller-manager-no-preload-073500   kube-system
	7747c2a4bb918       5f1f5298c888d       38 seconds ago      Running             etcd                      0                   c5210c071953a       etcd-no-preload-073500                      kube-system
	1694def06ea2d       c3994bc696102       38 seconds ago      Running             kube-apiserver            0                   bf7191e82270a       kube-apiserver-no-preload-073500            kube-system
	
	
	==> containerd <==
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.591738958Z" level=info msg="StartContainer for \"54a1be0953581fccca25c8f76cbfc64195b931deede6c54427fe729f8d7f30b8\""
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.592791651Z" level=info msg="connecting to shim 54a1be0953581fccca25c8f76cbfc64195b931deede6c54427fe729f8d7f30b8" address="unix:///run/containerd/s/53ae53f34dc82bdd99c253de3f855bfc478e361fc15173702a9390fda674186f" protocol=ttrpc version=3
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.597230868Z" level=info msg="CreateContainer within sandbox \"69f54518e3c1b7ad6f7ba8225da509ad0f08a5a49eadbd5bd45c757a8ad533c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.605288296Z" level=info msg="Container d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.612963411Z" level=info msg="CreateContainer within sandbox \"69f54518e3c1b7ad6f7ba8225da509ad0f08a5a49eadbd5bd45c757a8ad533c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a\""
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.613577649Z" level=info msg="StartContainer for \"d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a\""
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.614620684Z" level=info msg="connecting to shim d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a" address="unix:///run/containerd/s/ef0025f209461d2dab235a4d5954e1c60afd90c0b85a642778936ae942690d2c" protocol=ttrpc version=3
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.644223070Z" level=info msg="StartContainer for \"54a1be0953581fccca25c8f76cbfc64195b931deede6c54427fe729f8d7f30b8\" returns successfully"
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.688567769Z" level=info msg="StartContainer for \"d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a\" returns successfully"
	Nov 23 08:32:33 no-preload-073500 containerd[663]: time="2025-11-23T08:32:33.145021833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6da94757-8ca2-4dd7-a188-8675c49bc42b,Namespace:default,Attempt:0,}"
	Nov 23 08:32:33 no-preload-073500 containerd[663]: time="2025-11-23T08:32:33.191995703Z" level=info msg="connecting to shim 1656cfa16fb3af9db7cea1742e8564ebceb31a39b8f24ba6c93a679a5420524c" address="unix:///run/containerd/s/db32b6cdb43b067165f08264dfaa05a57faf78a7c899956dd2d4040b1a35b422" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:32:33 no-preload-073500 containerd[663]: time="2025-11-23T08:32:33.261275010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6da94757-8ca2-4dd7-a188-8675c49bc42b,Namespace:default,Attempt:0,} returns sandbox id \"1656cfa16fb3af9db7cea1742e8564ebceb31a39b8f24ba6c93a679a5420524c\""
	Nov 23 08:32:33 no-preload-073500 containerd[663]: time="2025-11-23T08:32:33.263174187Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.456490313Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.457204106Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396647"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.458495742Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.461150595Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.461641259Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.198422116s"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.461681971Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.465950907Z" level=info msg="CreateContainer within sandbox \"1656cfa16fb3af9db7cea1742e8564ebceb31a39b8f24ba6c93a679a5420524c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.473922869Z" level=info msg="Container 59b49df94c588d92701b29808a76e4829af8aefb9785d4f7c2e32f320fe0c941: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.480001823Z" level=info msg="CreateContainer within sandbox \"1656cfa16fb3af9db7cea1742e8564ebceb31a39b8f24ba6c93a679a5420524c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"59b49df94c588d92701b29808a76e4829af8aefb9785d4f7c2e32f320fe0c941\""
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.480714197Z" level=info msg="StartContainer for \"59b49df94c588d92701b29808a76e4829af8aefb9785d4f7c2e32f320fe0c941\""
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.481579280Z" level=info msg="connecting to shim 59b49df94c588d92701b29808a76e4829af8aefb9785d4f7c2e32f320fe0c941" address="unix:///run/containerd/s/db32b6cdb43b067165f08264dfaa05a57faf78a7c899956dd2d4040b1a35b422" protocol=ttrpc version=3
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.535084547Z" level=info msg="StartContainer for \"59b49df94c588d92701b29808a76e4829af8aefb9785d4f7c2e32f320fe0c941\" returns successfully"
	
	
	==> coredns [d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54270 - 36156 "HINFO IN 4655859727813600607.7439009089244566345. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018318146s
	
	
	==> describe nodes <==
	Name:               no-preload-073500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-073500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=no-preload-073500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_32_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:32:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-073500
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:32:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-073500
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                fc918513-edbd-4c0e-aaa2-f8e0714cc389
	  Boot ID:                    5380b858-5e3f-4ab2-b78d-8704cd56a682
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-tckhn                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-073500                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-5mdzd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-073500             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-073500    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-v2r6z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-073500             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node no-preload-073500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node no-preload-073500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node no-preload-073500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node no-preload-073500 event: Registered Node no-preload-073500 in Controller
	  Normal  NodeReady                13s   kubelet          Node no-preload-073500 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 7d 09 6f 5f 2b 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 d4 5e e6 42 49 08 06
	[ +11.373766] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 a4 f8 6b 15 37 08 06
	[  +0.013916] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[ +40.470104] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	[  +0.167388] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 d3 04 3f 4c f4 08 06
	[  +2.400864] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 01 20 fe a4 35 08 06
	[  +0.000386] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[  +5.210763] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[Nov23 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a c0 03 9d 77 98 08 06
	[  +0.000409] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[ +19.602508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 9b 99 36 e6 f4 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	
	
	==> etcd [7747c2a4bb918ea617bd6f19c4317f791d1656c2fb70574737d1f447f22de3f9] <==
	{"level":"info","ts":"2025-11-23T08:32:07.643677Z","caller":"traceutil/trace.go:172","msg":"trace[769291562] transaction","detail":"{read_only:false; response_revision:26; number_of_response:1; }","duration":"298.874236ms","start":"2025-11-23T08:32:07.344798Z","end":"2025-11-23T08:32:07.643672Z","steps":["trace[769291562] 'process raft request'  (duration: 298.165892ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.644854Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.344785Z","time spent":"300.0352ms","remote":"127.0.0.1:59066","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":965,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.coordination.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.coordination.k8s.io\" value_size:890 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:32:07.643750Z","caller":"traceutil/trace.go:172","msg":"trace[460890132] transaction","detail":"{read_only:false; response_revision:27; number_of_response:1; }","duration":"298.138118ms","start":"2025-11-23T08:32:07.345605Z","end":"2025-11-23T08:32:07.643743Z","steps":["trace[460890132] 'process raft request'  (duration: 297.391504ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.643839Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"275.876506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:07.645074Z","caller":"traceutil/trace.go:172","msg":"trace[1424263317] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:29; }","duration":"277.112844ms","start":"2025-11-23T08:32:07.367953Z","end":"2025-11-23T08:32:07.645066Z","steps":["trace[1424263317] 'agreement among raft nodes before linearized reading'  (duration: 275.859507ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.775362Z","caller":"traceutil/trace.go:172","msg":"trace[781143661] linearizableReadLoop","detail":"{readStateIndex:33; appliedIndex:33; }","duration":"124.463234ms","start":"2025-11-23T08:32:07.650872Z","end":"2025-11-23T08:32:07.775335Z","steps":["trace[781143661] 'read index received'  (duration: 124.453748ms)","trace[781143661] 'applied index is now lower than readState.Index'  (duration: 7.855µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:07.818465Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.566825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:07.818549Z","caller":"traceutil/trace.go:172","msg":"trace[931802762] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:29; }","duration":"167.664812ms","start":"2025-11-23T08:32:07.650868Z","end":"2025-11-23T08:32:07.818533Z","steps":["trace[931802762] 'agreement among raft nodes before linearized reading'  (duration: 124.568759ms)","trace[931802762] 'range keys from in-memory index tree'  (duration: 42.959053ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:32:07.818955Z","caller":"traceutil/trace.go:172","msg":"trace[1362876802] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"169.367853ms","start":"2025-11-23T08:32:07.649565Z","end":"2025-11-23T08:32:07.818933Z","steps":["trace[1362876802] 'process raft request'  (duration: 125.825774ms)","trace[1362876802] 'compare'  (duration: 43.169374ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:32:07.819144Z","caller":"traceutil/trace.go:172","msg":"trace[1525454215] transaction","detail":"{read_only:false; response_revision:32; number_of_response:1; }","duration":"167.444236ms","start":"2025-11-23T08:32:07.651690Z","end":"2025-11-23T08:32:07.819134Z","steps":["trace[1525454215] 'process raft request'  (duration: 167.323668ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819246Z","caller":"traceutil/trace.go:172","msg":"trace[1448607295] transaction","detail":"{read_only:false; response_revision:38; number_of_response:1; }","duration":"161.340814ms","start":"2025-11-23T08:32:07.657891Z","end":"2025-11-23T08:32:07.819232Z","steps":["trace[1448607295] 'process raft request'  (duration: 161.281891ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819299Z","caller":"traceutil/trace.go:172","msg":"trace[1577299142] transaction","detail":"{read_only:false; response_revision:31; number_of_response:1; }","duration":"168.518704ms","start":"2025-11-23T08:32:07.650774Z","end":"2025-11-23T08:32:07.819292Z","steps":["trace[1577299142] 'process raft request'  (duration: 167.977439ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819521Z","caller":"traceutil/trace.go:172","msg":"trace[143932226] transaction","detail":"{read_only:false; response_revision:33; number_of_response:1; }","duration":"166.685786ms","start":"2025-11-23T08:32:07.652791Z","end":"2025-11-23T08:32:07.819477Z","steps":["trace[143932226] 'process raft request'  (duration: 166.260296ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819650Z","caller":"traceutil/trace.go:172","msg":"trace[1578508801] transaction","detail":"{read_only:false; response_revision:34; number_of_response:1; }","duration":"166.705911ms","start":"2025-11-23T08:32:07.652934Z","end":"2025-11-23T08:32:07.819640Z","steps":["trace[1578508801] 'process raft request'  (duration: 166.146137ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819688Z","caller":"traceutil/trace.go:172","msg":"trace[1622845589] transaction","detail":"{read_only:false; response_revision:37; number_of_response:1; }","duration":"162.393719ms","start":"2025-11-23T08:32:07.657285Z","end":"2025-11-23T08:32:07.819679Z","steps":["trace[1622845589] 'process raft request'  (duration: 161.861299ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819766Z","caller":"traceutil/trace.go:172","msg":"trace[783382252] transaction","detail":"{read_only:false; response_revision:35; number_of_response:1; }","duration":"164.250795ms","start":"2025-11-23T08:32:07.655496Z","end":"2025-11-23T08:32:07.819747Z","steps":["trace[783382252] 'process raft request'  (duration: 163.604629ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819854Z","caller":"traceutil/trace.go:172","msg":"trace[1664285353] transaction","detail":"{read_only:false; response_revision:36; number_of_response:1; }","duration":"164.21887ms","start":"2025-11-23T08:32:07.655630Z","end":"2025-11-23T08:32:07.819848Z","steps":["trace[1664285353] 'process raft request'  (duration: 163.487951ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:08.139325Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"231.059336ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:08.139395Z","caller":"traceutil/trace.go:172","msg":"trace[1206373715] range","detail":"{range_begin:/registry/clusterroles/admin; range_end:; response_count:0; response_revision:42; }","duration":"231.143078ms","start":"2025-11-23T08:32:07.908235Z","end":"2025-11-23T08:32:08.139378Z","steps":["trace[1206373715] 'agreement among raft nodes before linearized reading'  (duration: 28.248611ms)","trace[1206373715] 'range keys from in-memory index tree'  (duration: 202.749343ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:08.139427Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.812961ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790207116312244 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/priorityclasses/system-node-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-node-critical\" value_size:375 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T08:32:08.139554Z","caller":"traceutil/trace.go:172","msg":"trace[108601188] transaction","detail":"{read_only:false; response_revision:44; number_of_response:1; }","duration":"231.52373ms","start":"2025-11-23T08:32:07.908020Z","end":"2025-11-23T08:32:08.139544Z","steps":["trace[108601188] 'process raft request'  (duration: 231.457893ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.139639Z","caller":"traceutil/trace.go:172","msg":"trace[321238394] linearizableReadLoop","detail":"{readStateIndex:47; appliedIndex:46; }","duration":"138.949542ms","start":"2025-11-23T08:32:08.000662Z","end":"2025-11-23T08:32:08.139611Z","steps":["trace[321238394] 'read index received'  (duration: 39.646615ms)","trace[321238394] 'applied index is now lower than readState.Index'  (duration: 99.301683ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:08.139735Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.069951ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:08.139741Z","caller":"traceutil/trace.go:172","msg":"trace[1262351696] transaction","detail":"{read_only:false; response_revision:43; number_of_response:1; }","duration":"235.918521ms","start":"2025-11-23T08:32:07.903811Z","end":"2025-11-23T08:32:08.139729Z","steps":["trace[1262351696] 'process raft request'  (duration: 32.750886ms)","trace[1262351696] 'compare'  (duration: 202.704583ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:32:08.139768Z","caller":"traceutil/trace.go:172","msg":"trace[1127088331] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:44; }","duration":"139.112234ms","start":"2025-11-23T08:32:08.000649Z","end":"2025-11-23T08:32:08.139761Z","steps":["trace[1127088331] 'agreement among raft nodes before linearized reading'  (duration: 139.035395ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:32:43 up  1:15,  0 user,  load average: 4.91, 3.87, 2.52
	Linux no-preload-073500 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ca3eb96c3d252adb3057593371f40b3caf4e1228909a36a8699f2a79d6fb6cce] <==
	I1123 08:32:19.793491       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:32:19.827580       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:32:19.827755       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:32:19.827778       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:32:19.827812       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:32:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:32:20.087956       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:32:20.087998       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:32:20.088010       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:32:20.088752       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:32:20.688105       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:32:20.688135       1 metrics.go:72] Registering metrics
	I1123 08:32:20.688238       1 controller.go:711] "Syncing nftables rules"
	I1123 08:32:30.032934       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:32:30.032973       1 main.go:301] handling current node
	I1123 08:32:40.033840       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:32:40.033903       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1694def06ea2d8a155cb47af99c0d04cc3c44caf1a6329d23288f186b50ea989] <==
	I1123 08:32:06.939678       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 08:32:06.998672       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1123 08:32:07.000270       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 08:32:07.001030       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:32:07.340889       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:07.341480       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:32:07.344956       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:32:08.142162       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:32:08.153670       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:32:08.153694       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:32:08.906637       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:32:08.978387       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:32:09.111440       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:32:09.126250       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 08:32:09.127721       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:32:09.132650       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:32:09.866578       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:32:09.929957       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:32:09.944340       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:32:09.954747       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:32:15.275873       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:15.286353       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:15.620739       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:32:15.972606       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:32:41.952875       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:49666: use of closed network connection
	
	
	==> kube-controller-manager [5f32b57cfa8650937c60154398a896b081e003e9946e18a8cd77530fe24a60b9] <==
	I1123 08:32:14.866104       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:32:14.866191       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:32:14.866199       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:32:14.866471       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:32:14.867007       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:32:14.867045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:32:14.867042       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:32:14.867117       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:32:14.867133       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:32:14.867152       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:32:14.868356       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:32:14.871687       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:32:14.871790       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:32:14.871814       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:32:14.871864       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:32:14.871890       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:32:14.871941       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:32:14.871984       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:32:14.871995       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:32:14.872367       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:32:14.878808       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-073500" podCIDRs=["10.244.0.0/24"]
	I1123 08:32:14.881730       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:32:14.884838       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:32:14.894113       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:32:34.821841       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b77ff0908bf0c0a81956c8305a20c3edf1b1187baf6a6728315a8b4103b53bae] <==
	I1123 08:32:16.640985       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:32:16.723559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:32:16.824314       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:32:16.824421       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 08:32:16.824564       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:32:16.848328       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:32:16.848388       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:32:16.853720       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:32:16.854203       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:32:16.854259       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:32:16.855970       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:32:16.856008       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:32:16.855982       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:32:16.856040       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:32:16.855977       1 config.go:200] "Starting service config controller"
	I1123 08:32:16.856051       1 config.go:309] "Starting node config controller"
	I1123 08:32:16.856053       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:32:16.856064       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:32:16.856072       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:32:16.956484       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:32:16.956560       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:32:16.956528       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e1519f5508be284cdd41d5d4c9cfd97cf0032a8eed4b95badacc618119647cc2] <==
	E1123 08:32:06.903417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:32:06.903486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:32:06.903608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:32:06.903686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:32:06.903838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:32:07.740422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:32:07.826496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:32:07.858290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:32:07.898684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:32:07.904813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:32:07.906864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:32:07.911338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:32:07.971666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:32:08.000537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:32:08.069098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:32:08.086496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:32:08.121297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:32:08.198700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:32:08.219278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:32:08.279135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:32:08.315740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:32:08.357627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:32:08.424586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:32:08.452104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1123 08:32:10.491291       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:32:10 no-preload-073500 kubelet[2208]: E1123 08:32:10.792735    2208 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-073500\" already exists" pod="kube-system/kube-scheduler-no-preload-073500"
	Nov 23 08:32:10 no-preload-073500 kubelet[2208]: I1123 08:32:10.809684    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-073500" podStartSLOduration=2.8096579090000002 podStartE2EDuration="2.809657909s" podCreationTimestamp="2025-11-23 08:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.809612174 +0000 UTC m=+1.142649339" watchObservedRunningTime="2025-11-23 08:32:10.809657909 +0000 UTC m=+1.142695049"
	Nov 23 08:32:10 no-preload-073500 kubelet[2208]: I1123 08:32:10.837075    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-073500" podStartSLOduration=1.837054575 podStartE2EDuration="1.837054575s" podCreationTimestamp="2025-11-23 08:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.836942669 +0000 UTC m=+1.169979819" watchObservedRunningTime="2025-11-23 08:32:10.837054575 +0000 UTC m=+1.170091710"
	Nov 23 08:32:10 no-preload-073500 kubelet[2208]: I1123 08:32:10.837195    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-073500" podStartSLOduration=2.837186182 podStartE2EDuration="2.837186182s" podCreationTimestamp="2025-11-23 08:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.824152295 +0000 UTC m=+1.157189435" watchObservedRunningTime="2025-11-23 08:32:10.837186182 +0000 UTC m=+1.170223322"
	Nov 23 08:32:10 no-preload-073500 kubelet[2208]: I1123 08:32:10.849085    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-073500" podStartSLOduration=1.849046651 podStartE2EDuration="1.849046651s" podCreationTimestamp="2025-11-23 08:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.848589542 +0000 UTC m=+1.181626704" watchObservedRunningTime="2025-11-23 08:32:10.849046651 +0000 UTC m=+1.182083791"
	Nov 23 08:32:14 no-preload-073500 kubelet[2208]: I1123 08:32:14.950016    2208 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:32:14 no-preload-073500 kubelet[2208]: I1123 08:32:14.950823    2208 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100173    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3db1b11c-8ba0-4951-a834-1a88c8abade4-xtables-lock\") pod \"kube-proxy-v2r6z\" (UID: \"3db1b11c-8ba0-4951-a834-1a88c8abade4\") " pod="kube-system/kube-proxy-v2r6z"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100706    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3db1b11c-8ba0-4951-a834-1a88c8abade4-lib-modules\") pod \"kube-proxy-v2r6z\" (UID: \"3db1b11c-8ba0-4951-a834-1a88c8abade4\") " pod="kube-system/kube-proxy-v2r6z"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100810    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7w7t\" (UniqueName: \"kubernetes.io/projected/3db1b11c-8ba0-4951-a834-1a88c8abade4-kube-api-access-g7w7t\") pod \"kube-proxy-v2r6z\" (UID: \"3db1b11c-8ba0-4951-a834-1a88c8abade4\") " pod="kube-system/kube-proxy-v2r6z"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100848    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30acd1c0-02f9-4309-b601-2173a8bf74e4-xtables-lock\") pod \"kindnet-5mdzd\" (UID: \"30acd1c0-02f9-4309-b601-2173a8bf74e4\") " pod="kube-system/kindnet-5mdzd"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100869    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30acd1c0-02f9-4309-b601-2173a8bf74e4-lib-modules\") pod \"kindnet-5mdzd\" (UID: \"30acd1c0-02f9-4309-b601-2173a8bf74e4\") " pod="kube-system/kindnet-5mdzd"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100899    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3db1b11c-8ba0-4951-a834-1a88c8abade4-kube-proxy\") pod \"kube-proxy-v2r6z\" (UID: \"3db1b11c-8ba0-4951-a834-1a88c8abade4\") " pod="kube-system/kube-proxy-v2r6z"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100920    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/30acd1c0-02f9-4309-b601-2173a8bf74e4-cni-cfg\") pod \"kindnet-5mdzd\" (UID: \"30acd1c0-02f9-4309-b601-2173a8bf74e4\") " pod="kube-system/kindnet-5mdzd"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100943    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgl6h\" (UniqueName: \"kubernetes.io/projected/30acd1c0-02f9-4309-b601-2173a8bf74e4-kube-api-access-sgl6h\") pod \"kindnet-5mdzd\" (UID: \"30acd1c0-02f9-4309-b601-2173a8bf74e4\") " pod="kube-system/kindnet-5mdzd"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.816000    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v2r6z" podStartSLOduration=1.81597631 podStartE2EDuration="1.81597631s" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:16.815895813 +0000 UTC m=+7.148932953" watchObservedRunningTime="2025-11-23 08:32:16.81597631 +0000 UTC m=+7.149013450"
	Nov 23 08:32:21 no-preload-073500 kubelet[2208]: I1123 08:32:21.802383    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5mdzd" podStartSLOduration=4.016599938 podStartE2EDuration="6.802359972s" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="2025-11-23 08:32:16.705364226 +0000 UTC m=+7.038401360" lastFinishedPulling="2025-11-23 08:32:19.491124253 +0000 UTC m=+9.824161394" observedRunningTime="2025-11-23 08:32:19.826005402 +0000 UTC m=+10.159042554" watchObservedRunningTime="2025-11-23 08:32:21.802359972 +0000 UTC m=+12.135397113"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.115990    2208 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.207588    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cf8f0fc5-565a-4403-b360-e6bdc35785cf-tmp\") pod \"storage-provisioner\" (UID: \"cf8f0fc5-565a-4403-b360-e6bdc35785cf\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.207633    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc0286b1-520a-4d92-998a-4c366264e9a1-config-volume\") pod \"coredns-66bc5c9577-tckhn\" (UID: \"cc0286b1-520a-4d92-998a-4c366264e9a1\") " pod="kube-system/coredns-66bc5c9577-tckhn"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.207656    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk4mt\" (UniqueName: \"kubernetes.io/projected/cf8f0fc5-565a-4403-b360-e6bdc35785cf-kube-api-access-vk4mt\") pod \"storage-provisioner\" (UID: \"cf8f0fc5-565a-4403-b360-e6bdc35785cf\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.207674    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzsnc\" (UniqueName: \"kubernetes.io/projected/cc0286b1-520a-4d92-998a-4c366264e9a1-kube-api-access-xzsnc\") pod \"coredns-66bc5c9577-tckhn\" (UID: \"cc0286b1-520a-4d92-998a-4c366264e9a1\") " pod="kube-system/coredns-66bc5c9577-tckhn"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.852447    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tckhn" podStartSLOduration=14.852428191 podStartE2EDuration="14.852428191s" podCreationTimestamp="2025-11-23 08:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:30.852244113 +0000 UTC m=+21.185281250" watchObservedRunningTime="2025-11-23 08:32:30.852428191 +0000 UTC m=+21.185465333"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.873881    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.87386041 podStartE2EDuration="14.87386041s" podCreationTimestamp="2025-11-23 08:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:30.873464003 +0000 UTC m=+21.206501155" watchObservedRunningTime="2025-11-23 08:32:30.87386041 +0000 UTC m=+21.206897551"
	Nov 23 08:32:32 no-preload-073500 kubelet[2208]: I1123 08:32:32.925736    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fsh\" (UniqueName: \"kubernetes.io/projected/6da94757-8ca2-4dd7-a188-8675c49bc42b-kube-api-access-l9fsh\") pod \"busybox\" (UID: \"6da94757-8ca2-4dd7-a188-8675c49bc42b\") " pod="default/busybox"
	
	
	==> storage-provisioner [54a1be0953581fccca25c8f76cbfc64195b931deede6c54427fe729f8d7f30b8] <==
	I1123 08:32:30.652804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:32:30.663901       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:32:30.663984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:32:30.666578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:30.671753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:30.671976       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:32:30.672089       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"14d3c6e4-f334-41a4-9fb6-618849a813fc", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-073500_af9e803d-08e2-4d92-968c-b638ec812417 became leader
	I1123 08:32:30.672192       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-073500_af9e803d-08e2-4d92-968c-b638ec812417!
	W1123 08:32:30.677189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:30.681594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:30.773607       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-073500_af9e803d-08e2-4d92-968c-b638ec812417!
	W1123 08:32:32.685644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:32.693680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:34.697623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:34.703656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:36.706732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:36.711930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:38.716137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:38.721144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:40.724331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:40.728897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:42.732990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:42.739540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
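The component dump above matches the output of the log collector that the helper invokes later in this post-mortem (logs -n 25). A minimal sketch for reproducing it by hand, assuming a minikube binary on PATH and the same profile name:

	# dump the last 25 lines of each component section shown above
	# (describe nodes, dmesg, etcd, kindnet, kube-apiserver, kubelet, ...)
	minikube -p no-preload-073500 logs -n 25
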
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-073500 -n no-preload-073500
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-073500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
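The helper's non-Running pod check (helpers_test.go:269 above) combines a field selector with a JSONPath template. A minimal sketch of the same query, assuming the kubectl context no-preload-073500 still exists:

	# print the names of all pods, in any namespace, whose phase is not Running
	kubectl --context no-preload-073500 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
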
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-073500
helpers_test.go:243: (dbg) docker inspect no-preload-073500:

-- stdout --
	[
	    {
	        "Id": "a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b",
	        "Created": "2025-11-23T08:31:37.952821586Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315635,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:31:37.996546751Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b/hostname",
	        "HostsPath": "/var/lib/docker/containers/a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b/hosts",
	        "LogPath": "/var/lib/docker/containers/a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b/a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b-json.log",
	        "Name": "/no-preload-073500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-073500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-073500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a2f4b0aed911ce7d94ccc1ddf46f00a87d196041b1844d342387e220d2a53c3b",
	                "LowerDir": "/var/lib/docker/overlay2/b19a2adb896b828848dbad36678fd3ca8e0afccf189c689b1a998394732f9972-init/diff:/var/lib/docker/overlay2/f8ae64c4d7d1e12e69b7d69a01d34a96c2f353aeac48a9b438b028f010c32149/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b19a2adb896b828848dbad36678fd3ca8e0afccf189c689b1a998394732f9972/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b19a2adb896b828848dbad36678fd3ca8e0afccf189c689b1a998394732f9972/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b19a2adb896b828848dbad36678fd3ca8e0afccf189c689b1a998394732f9972/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-073500",
	                "Source": "/var/lib/docker/volumes/no-preload-073500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-073500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-073500",
	                "name.minikube.sigs.k8s.io": "no-preload-073500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e545cc0c139262a098dd6e2c2dc420cca42c8aa6900efc08797fe460e7a9b3c6",
	            "SandboxKey": "/var/run/docker/netns/e545cc0c1392",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-073500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6f0be6481d26f2f2cf931fb63b4eac81badf2507cbf1ef00db671fae95e6d0a",
	                    "EndpointID": "e458ec7b30cb719879fa07beb212dca4de83e9977fac67058b6b208a81f08945",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ee:99:d1:8a:7b:a1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-073500",
	                        "a2f4b0aed911"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
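In the inspect output above, HostConfig reports "Ulimits": [], meaning the kic container sets no explicit nofile limit and inherits the Docker daemon's default; that is consistent with the "ulimit -n" mismatch (1024 returned, 1048576 expected) this test group reports. A minimal sketch for extracting just that field, assuming only the docker CLI already used throughout this report (the helper name ulimitsOf is illustrative, not part of the harness):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// ulimitsOf shells out to `docker inspect` and decodes HostConfig.Ulimits
	// for the named container; an empty slice means daemon defaults apply.
	func ulimitsOf(name string) ([]map[string]any, error) {
		out, err := exec.Command("docker", "inspect",
			"--format", "{{json .HostConfig.Ulimits}}", name).Output()
		if err != nil {
			return nil, err
		}
		var ulimits []map[string]any
		if err := json.Unmarshal(out, &ulimits); err != nil {
			return nil, err
		}
		return ulimits, nil
	}

	func main() {
		// Profile name taken from the inspect output above.
		ulimits, err := ulimitsOf("no-preload-073500")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("Ulimits: %v (empty means daemon defaults)\n", ulimits)
	}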
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-073500 -n no-preload-073500
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-073500 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-073500 logs -n 25: (1.147896091s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-366757 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo docker system info                                                                                                                                                                                                            │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo containerd config dump                                                                                                                                                                                                        │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo crio config                                                                                                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ delete  │ -p bridge-366757                                                                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-900754                                                                                                                                                                                                                     │ disable-driver-mounts-900754 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-589368 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-644335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ stop    │ -p old-k8s-version-644335 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-644335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p old-k8s-version-644335 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-329854 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-329854           │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ stop    │ -p embed-certs-329854 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-329854           │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:32:39
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:32:39.044690  334214 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:32:39.044936  334214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:39.044944  334214 out.go:374] Setting ErrFile to fd 2...
	I1123 08:32:39.044948  334214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:39.045161  334214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:32:39.045639  334214 out.go:368] Setting JSON to false
	I1123 08:32:39.046982  334214 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4497,"bootTime":1763882262,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:32:39.047047  334214 start.go:143] virtualization: kvm guest
	I1123 08:32:39.049146  334214 out.go:179] * [old-k8s-version-644335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:32:39.050496  334214 notify.go:221] Checking for updates...
	I1123 08:32:39.050526  334214 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:32:39.052898  334214 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:32:39.054570  334214 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:39.055701  334214 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 08:32:39.056898  334214 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:32:39.058305  334214 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:32:39.059876  334214 config.go:182] Loaded profile config "old-k8s-version-644335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:32:39.061465  334214 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 08:32:39.062491  334214 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:32:39.087411  334214 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:32:39.087536  334214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:39.147576  334214 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:32:39.137613264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:39.147691  334214 docker.go:319] overlay module found
	I1123 08:32:39.149361  334214 out.go:179] * Using the docker driver based on existing profile
	I1123 08:32:39.150582  334214 start.go:309] selected driver: docker
	I1123 08:32:39.150595  334214 start.go:927] validating driver "docker" against &{Name:old-k8s-version-644335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-644335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:39.150676  334214 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:32:39.151208  334214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:39.210357  334214 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:32:39.19964774 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:39.210699  334214 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:32:39.210735  334214 cni.go:84] Creating CNI manager for ""
	I1123 08:32:39.210806  334214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:39.210857  334214 start.go:353] cluster config:
	{Name:old-k8s-version-644335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-644335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:39.212801  334214 out.go:179] * Starting "old-k8s-version-644335" primary control-plane node in "old-k8s-version-644335" cluster
	I1123 08:32:39.213896  334214 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:32:39.214895  334214 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:32:39.216134  334214 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:32:39.216186  334214 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1123 08:32:39.216199  334214 cache.go:65] Caching tarball of preloaded images
	I1123 08:32:39.216287  334214 preload.go:238] Found /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:32:39.216300  334214 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:32:39.216306  334214 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:32:39.216427  334214 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/old-k8s-version-644335/config.json ...
	I1123 08:32:39.239444  334214 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:32:39.239465  334214 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:32:39.239488  334214 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:32:39.239557  334214 start.go:360] acquireMachinesLock for old-k8s-version-644335: {Name:mk2d92388f6ee555f9afab8f780d1d668db94689 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:32:39.239638  334214 start.go:364] duration metric: took 43.187µs to acquireMachinesLock for "old-k8s-version-644335"
	I1123 08:32:39.239663  334214 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:32:39.239673  334214 fix.go:54] fixHost starting: 
	I1123 08:32:39.239964  334214 cli_runner.go:164] Run: docker container inspect old-k8s-version-644335 --format={{.State.Status}}
	I1123 08:32:39.261431  334214 fix.go:112] recreateIfNeeded on old-k8s-version-644335: state=Stopped err=<nil>
	W1123 08:32:39.261471  334214 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 08:32:38.339696  326134 node_ready.go:57] node "default-k8s-diff-port-589368" has "Ready":"False" status (will retry)
	W1123 08:32:40.339931  326134 node_ready.go:57] node "default-k8s-diff-port-589368" has "Ready":"False" status (will retry)
	I1123 08:32:39.263461  334214 out.go:252] * Restarting existing docker container for "old-k8s-version-644335" ...
	I1123 08:32:39.263573  334214 cli_runner.go:164] Run: docker start old-k8s-version-644335
	I1123 08:32:39.578380  334214 cli_runner.go:164] Run: docker container inspect old-k8s-version-644335 --format={{.State.Status}}
	I1123 08:32:39.599344  334214 kic.go:430] container "old-k8s-version-644335" state is running.
	I1123 08:32:39.599714  334214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-644335
	I1123 08:32:39.619854  334214 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/old-k8s-version-644335/config.json ...
	I1123 08:32:39.620127  334214 machine.go:94] provisionDockerMachine start ...
	I1123 08:32:39.620231  334214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-644335
	I1123 08:32:39.641444  334214 main.go:143] libmachine: Using SSH client type: native
	I1123 08:32:39.641824  334214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 08:32:39.641851  334214 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:32:39.645183  334214 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52568->127.0.0.1:33113: read: connection reset by peer
	I1123 08:32:42.809673  334214 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-644335
	
	I1123 08:32:42.809707  334214 ubuntu.go:182] provisioning hostname "old-k8s-version-644335"
	I1123 08:32:42.809768  334214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-644335
	I1123 08:32:42.830869  334214 main.go:143] libmachine: Using SSH client type: native
	I1123 08:32:42.831177  334214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 08:32:42.831201  334214 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-644335 && echo "old-k8s-version-644335" | sudo tee /etc/hostname
	I1123 08:32:42.998268  334214 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-644335
	
	I1123 08:32:42.998353  334214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-644335
	I1123 08:32:43.020413  334214 main.go:143] libmachine: Using SSH client type: native
	I1123 08:32:43.020728  334214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 08:32:43.020757  334214 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-644335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-644335/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-644335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:32:43.181445  334214 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:32:43.181483  334214 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10922/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10922/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10922/.minikube}
	I1123 08:32:43.181527  334214 ubuntu.go:190] setting up certificates
	I1123 08:32:43.181540  334214 provision.go:84] configureAuth start
	I1123 08:32:43.181599  334214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-644335
	I1123 08:32:43.206975  334214 provision.go:143] copyHostCerts
	I1123 08:32:43.207057  334214 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10922/.minikube/ca.pem, removing ...
	I1123 08:32:43.207077  334214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.pem
	I1123 08:32:43.207155  334214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10922/.minikube/ca.pem (1078 bytes)
	I1123 08:32:43.207549  334214 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10922/.minikube/cert.pem, removing ...
	I1123 08:32:43.207567  334214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10922/.minikube/cert.pem
	I1123 08:32:43.207631  334214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10922/.minikube/cert.pem (1123 bytes)
	I1123 08:32:43.207731  334214 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10922/.minikube/key.pem, removing ...
	I1123 08:32:43.207741  334214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10922/.minikube/key.pem
	I1123 08:32:43.207781  334214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10922/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10922/.minikube/key.pem (1675 bytes)
	I1123 08:32:43.207858  334214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10922/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-644335 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-644335]
	I1123 08:32:43.320336  334214 provision.go:177] copyRemoteCerts
	I1123 08:32:43.320403  334214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:32:43.320444  334214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-644335
	I1123 08:32:43.345921  334214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/old-k8s-version-644335/id_rsa Username:docker}
	I1123 08:32:43.456329  334214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:32:43.477098  334214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 08:32:43.496435  334214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:32:43.515298  334214 provision.go:87] duration metric: took 333.746458ms to configureAuth
	I1123 08:32:43.515329  334214 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:32:43.515570  334214 config.go:182] Loaded profile config "old-k8s-version-644335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:32:43.515586  334214 machine.go:97] duration metric: took 3.895440606s to provisionDockerMachine
	I1123 08:32:43.515597  334214 start.go:293] postStartSetup for "old-k8s-version-644335" (driver="docker")
	I1123 08:32:43.515610  334214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:32:43.515679  334214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:32:43.515725  334214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-644335
	I1123 08:32:43.536388  334214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/old-k8s-version-644335/id_rsa Username:docker}
	I1123 08:32:43.644938  334214 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:32:43.649170  334214 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:32:43.649199  334214 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:32:43.649213  334214 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10922/.minikube/addons for local assets ...
	I1123 08:32:43.649267  334214 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10922/.minikube/files for local assets ...
	I1123 08:32:43.649377  334214 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10922/.minikube/files/etc/ssl/certs/144792.pem -> 144792.pem in /etc/ssl/certs
	I1123 08:32:43.649519  334214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:32:43.659066  334214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10922/.minikube/files/etc/ssl/certs/144792.pem --> /etc/ssl/certs/144792.pem (1708 bytes)
	I1123 08:32:43.682903  334214 start.go:296] duration metric: took 167.290133ms for postStartSetup
	I1123 08:32:43.682989  334214 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:32:43.683039  334214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-644335
	I1123 08:32:43.706273  334214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/old-k8s-version-644335/id_rsa Username:docker}
	I1123 08:32:43.813224  334214 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:32:43.818835  334214 fix.go:56] duration metric: took 4.579156336s for fixHost
	I1123 08:32:43.818875  334214 start.go:83] releasing machines lock for "old-k8s-version-644335", held for 4.579222557s
	I1123 08:32:43.818959  334214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-644335
	I1123 08:32:43.843351  334214 ssh_runner.go:195] Run: cat /version.json
	I1123 08:32:43.843408  334214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-644335
	I1123 08:32:43.843609  334214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:32:43.843688  334214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-644335
	I1123 08:32:43.885543  334214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/old-k8s-version-644335/id_rsa Username:docker}
	I1123 08:32:43.895837  334214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/old-k8s-version-644335/id_rsa Username:docker}
	I1123 08:32:44.005914  334214 ssh_runner.go:195] Run: systemctl --version
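The provisioning steps above repeatedly resolve which host port Docker published for the container's 22/tcp endpoint (33113 here) before dialing SSH. A sketch of the same lookup, reusing the exact Go template shown in the log (the helper name sshHostPort is illustrative, not part of the harness):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// sshHostPort runs the same Go-template query the log shows to find the
	// host port published for the container's 22/tcp endpoint.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("old-k8s-version-644335")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("SSH reachable at 127.0.0.1:" + port) // 33113 in the log above
	}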
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	59b49df94c588       56cc512116c8f       9 seconds ago       Running             busybox                   0                   1656cfa16fb3a       busybox                                     default
	d9d52668bdece       52546a367cc9e       14 seconds ago      Running             coredns                   0                   69f54518e3c1b       coredns-66bc5c9577-tckhn                    kube-system
	54a1be0953581       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   fe525bd135965       storage-provisioner                         kube-system
	ca3eb96c3d252       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   9029b858fb87b       kindnet-5mdzd                               kube-system
	b77ff0908bf0c       fc25172553d79       28 seconds ago      Running             kube-proxy                0                   bab040b52c0a6       kube-proxy-v2r6z                            kube-system
	e1519f5508be2       7dd6aaa1717ab       40 seconds ago      Running             kube-scheduler            0                   a7c6095e8192b       kube-scheduler-no-preload-073500            kube-system
	5f32b57cfa865       c80c8dbafe7dd       40 seconds ago      Running             kube-controller-manager   0                   eaf0276d6fa0a       kube-controller-manager-no-preload-073500   kube-system
	7747c2a4bb918       5f1f5298c888d       40 seconds ago      Running             etcd                      0                   c5210c071953a       etcd-no-preload-073500                      kube-system
	1694def06ea2d       c3994bc696102       40 seconds ago      Running             kube-apiserver            0                   bf7191e82270a       kube-apiserver-no-preload-073500            kube-system
	
	
	==> containerd <==
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.591738958Z" level=info msg="StartContainer for \"54a1be0953581fccca25c8f76cbfc64195b931deede6c54427fe729f8d7f30b8\""
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.592791651Z" level=info msg="connecting to shim 54a1be0953581fccca25c8f76cbfc64195b931deede6c54427fe729f8d7f30b8" address="unix:///run/containerd/s/53ae53f34dc82bdd99c253de3f855bfc478e361fc15173702a9390fda674186f" protocol=ttrpc version=3
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.597230868Z" level=info msg="CreateContainer within sandbox \"69f54518e3c1b7ad6f7ba8225da509ad0f08a5a49eadbd5bd45c757a8ad533c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.605288296Z" level=info msg="Container d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.612963411Z" level=info msg="CreateContainer within sandbox \"69f54518e3c1b7ad6f7ba8225da509ad0f08a5a49eadbd5bd45c757a8ad533c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a\""
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.613577649Z" level=info msg="StartContainer for \"d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a\""
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.614620684Z" level=info msg="connecting to shim d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a" address="unix:///run/containerd/s/ef0025f209461d2dab235a4d5954e1c60afd90c0b85a642778936ae942690d2c" protocol=ttrpc version=3
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.644223070Z" level=info msg="StartContainer for \"54a1be0953581fccca25c8f76cbfc64195b931deede6c54427fe729f8d7f30b8\" returns successfully"
	Nov 23 08:32:30 no-preload-073500 containerd[663]: time="2025-11-23T08:32:30.688567769Z" level=info msg="StartContainer for \"d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a\" returns successfully"
	Nov 23 08:32:33 no-preload-073500 containerd[663]: time="2025-11-23T08:32:33.145021833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6da94757-8ca2-4dd7-a188-8675c49bc42b,Namespace:default,Attempt:0,}"
	Nov 23 08:32:33 no-preload-073500 containerd[663]: time="2025-11-23T08:32:33.191995703Z" level=info msg="connecting to shim 1656cfa16fb3af9db7cea1742e8564ebceb31a39b8f24ba6c93a679a5420524c" address="unix:///run/containerd/s/db32b6cdb43b067165f08264dfaa05a57faf78a7c899956dd2d4040b1a35b422" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:32:33 no-preload-073500 containerd[663]: time="2025-11-23T08:32:33.261275010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6da94757-8ca2-4dd7-a188-8675c49bc42b,Namespace:default,Attempt:0,} returns sandbox id \"1656cfa16fb3af9db7cea1742e8564ebceb31a39b8f24ba6c93a679a5420524c\""
	Nov 23 08:32:33 no-preload-073500 containerd[663]: time="2025-11-23T08:32:33.263174187Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.456490313Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.457204106Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396647"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.458495742Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.461150595Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.461641259Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.198422116s"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.461681971Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.465950907Z" level=info msg="CreateContainer within sandbox \"1656cfa16fb3af9db7cea1742e8564ebceb31a39b8f24ba6c93a679a5420524c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.473922869Z" level=info msg="Container 59b49df94c588d92701b29808a76e4829af8aefb9785d4f7c2e32f320fe0c941: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.480001823Z" level=info msg="CreateContainer within sandbox \"1656cfa16fb3af9db7cea1742e8564ebceb31a39b8f24ba6c93a679a5420524c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"59b49df94c588d92701b29808a76e4829af8aefb9785d4f7c2e32f320fe0c941\""
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.480714197Z" level=info msg="StartContainer for \"59b49df94c588d92701b29808a76e4829af8aefb9785d4f7c2e32f320fe0c941\""
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.481579280Z" level=info msg="connecting to shim 59b49df94c588d92701b29808a76e4829af8aefb9785d4f7c2e32f320fe0c941" address="unix:///run/containerd/s/db32b6cdb43b067165f08264dfaa05a57faf78a7c899956dd2d4040b1a35b422" protocol=ttrpc version=3
	Nov 23 08:32:35 no-preload-073500 containerd[663]: time="2025-11-23T08:32:35.535084547Z" level=info msg="StartContainer for \"59b49df94c588d92701b29808a76e4829af8aefb9785d4f7c2e32f320fe0c941\" returns successfully"
	
	
	==> coredns [d9d52668bdecec53ab53095253faac50afe25d451ff3890fff55b83714abfd2a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54270 - 36156 "HINFO IN 4655859727813600607.7439009089244566345. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018318146s
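The HINFO query for a random numeric name right after startup comes from CoreDNS's loop plugin, which probes the upstream with a nonsense name to detect forwarding loops; the NXDOMAIN answer here means no loop was found, so this entry is expected startup noise rather than an error.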
	
	
	==> describe nodes <==
	Name:               no-preload-073500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-073500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=no-preload-073500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_32_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:32:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-073500
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:32:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:32:40 +0000   Sun, 23 Nov 2025 08:32:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-073500
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                fc918513-edbd-4c0e-aaa2-f8e0714cc389
	  Boot ID:                    5380b858-5e3f-4ab2-b78d-8704cd56a682
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-tckhn                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-no-preload-073500                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-5mdzd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-073500             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-073500    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-v2r6z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-073500             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 36s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  36s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  36s   kubelet          Node no-preload-073500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s   kubelet          Node no-preload-073500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s   kubelet          Node no-preload-073500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node no-preload-073500 event: Registered Node no-preload-073500 in Controller
	  Normal  NodeReady                15s   kubelet          Node no-preload-073500 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 7d 09 6f 5f 2b 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 d4 5e e6 42 49 08 06
	[ +11.373766] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 a4 f8 6b 15 37 08 06
	[  +0.013916] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[ +40.470104] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	[  +0.167388] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 d3 04 3f 4c f4 08 06
	[  +2.400864] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 01 20 fe a4 35 08 06
	[  +0.000386] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[  +5.210763] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[Nov23 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a c0 03 9d 77 98 08 06
	[  +0.000409] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[ +19.602508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 9b 99 36 e6 f4 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	
	
	==> etcd [7747c2a4bb918ea617bd6f19c4317f791d1656c2fb70574737d1f447f22de3f9] <==
	{"level":"info","ts":"2025-11-23T08:32:07.643677Z","caller":"traceutil/trace.go:172","msg":"trace[769291562] transaction","detail":"{read_only:false; response_revision:26; number_of_response:1; }","duration":"298.874236ms","start":"2025-11-23T08:32:07.344798Z","end":"2025-11-23T08:32:07.643672Z","steps":["trace[769291562] 'process raft request'  (duration: 298.165892ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.644854Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:32:07.344785Z","time spent":"300.0352ms","remote":"127.0.0.1:59066","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":965,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.coordination.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.coordination.k8s.io\" value_size:890 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:32:07.643750Z","caller":"traceutil/trace.go:172","msg":"trace[460890132] transaction","detail":"{read_only:false; response_revision:27; number_of_response:1; }","duration":"298.138118ms","start":"2025-11-23T08:32:07.345605Z","end":"2025-11-23T08:32:07.643743Z","steps":["trace[460890132] 'process raft request'  (duration: 297.391504ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:07.643839Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"275.876506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:07.645074Z","caller":"traceutil/trace.go:172","msg":"trace[1424263317] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:29; }","duration":"277.112844ms","start":"2025-11-23T08:32:07.367953Z","end":"2025-11-23T08:32:07.645066Z","steps":["trace[1424263317] 'agreement among raft nodes before linearized reading'  (duration: 275.859507ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.775362Z","caller":"traceutil/trace.go:172","msg":"trace[781143661] linearizableReadLoop","detail":"{readStateIndex:33; appliedIndex:33; }","duration":"124.463234ms","start":"2025-11-23T08:32:07.650872Z","end":"2025-11-23T08:32:07.775335Z","steps":["trace[781143661] 'read index received'  (duration: 124.453748ms)","trace[781143661] 'applied index is now lower than readState.Index'  (duration: 7.855µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:07.818465Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.566825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:07.818549Z","caller":"traceutil/trace.go:172","msg":"trace[931802762] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:29; }","duration":"167.664812ms","start":"2025-11-23T08:32:07.650868Z","end":"2025-11-23T08:32:07.818533Z","steps":["trace[931802762] 'agreement among raft nodes before linearized reading'  (duration: 124.568759ms)","trace[931802762] 'range keys from in-memory index tree'  (duration: 42.959053ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:32:07.818955Z","caller":"traceutil/trace.go:172","msg":"trace[1362876802] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"169.367853ms","start":"2025-11-23T08:32:07.649565Z","end":"2025-11-23T08:32:07.818933Z","steps":["trace[1362876802] 'process raft request'  (duration: 125.825774ms)","trace[1362876802] 'compare'  (duration: 43.169374ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:32:07.819144Z","caller":"traceutil/trace.go:172","msg":"trace[1525454215] transaction","detail":"{read_only:false; response_revision:32; number_of_response:1; }","duration":"167.444236ms","start":"2025-11-23T08:32:07.651690Z","end":"2025-11-23T08:32:07.819134Z","steps":["trace[1525454215] 'process raft request'  (duration: 167.323668ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819246Z","caller":"traceutil/trace.go:172","msg":"trace[1448607295] transaction","detail":"{read_only:false; response_revision:38; number_of_response:1; }","duration":"161.340814ms","start":"2025-11-23T08:32:07.657891Z","end":"2025-11-23T08:32:07.819232Z","steps":["trace[1448607295] 'process raft request'  (duration: 161.281891ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819299Z","caller":"traceutil/trace.go:172","msg":"trace[1577299142] transaction","detail":"{read_only:false; response_revision:31; number_of_response:1; }","duration":"168.518704ms","start":"2025-11-23T08:32:07.650774Z","end":"2025-11-23T08:32:07.819292Z","steps":["trace[1577299142] 'process raft request'  (duration: 167.977439ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819521Z","caller":"traceutil/trace.go:172","msg":"trace[143932226] transaction","detail":"{read_only:false; response_revision:33; number_of_response:1; }","duration":"166.685786ms","start":"2025-11-23T08:32:07.652791Z","end":"2025-11-23T08:32:07.819477Z","steps":["trace[143932226] 'process raft request'  (duration: 166.260296ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819650Z","caller":"traceutil/trace.go:172","msg":"trace[1578508801] transaction","detail":"{read_only:false; response_revision:34; number_of_response:1; }","duration":"166.705911ms","start":"2025-11-23T08:32:07.652934Z","end":"2025-11-23T08:32:07.819640Z","steps":["trace[1578508801] 'process raft request'  (duration: 166.146137ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819688Z","caller":"traceutil/trace.go:172","msg":"trace[1622845589] transaction","detail":"{read_only:false; response_revision:37; number_of_response:1; }","duration":"162.393719ms","start":"2025-11-23T08:32:07.657285Z","end":"2025-11-23T08:32:07.819679Z","steps":["trace[1622845589] 'process raft request'  (duration: 161.861299ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819766Z","caller":"traceutil/trace.go:172","msg":"trace[783382252] transaction","detail":"{read_only:false; response_revision:35; number_of_response:1; }","duration":"164.250795ms","start":"2025-11-23T08:32:07.655496Z","end":"2025-11-23T08:32:07.819747Z","steps":["trace[783382252] 'process raft request'  (duration: 163.604629ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:07.819854Z","caller":"traceutil/trace.go:172","msg":"trace[1664285353] transaction","detail":"{read_only:false; response_revision:36; number_of_response:1; }","duration":"164.21887ms","start":"2025-11-23T08:32:07.655630Z","end":"2025-11-23T08:32:07.819848Z","steps":["trace[1664285353] 'process raft request'  (duration: 163.487951ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:32:08.139325Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"231.059336ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:08.139395Z","caller":"traceutil/trace.go:172","msg":"trace[1206373715] range","detail":"{range_begin:/registry/clusterroles/admin; range_end:; response_count:0; response_revision:42; }","duration":"231.143078ms","start":"2025-11-23T08:32:07.908235Z","end":"2025-11-23T08:32:08.139378Z","steps":["trace[1206373715] 'agreement among raft nodes before linearized reading'  (duration: 28.248611ms)","trace[1206373715] 'range keys from in-memory index tree'  (duration: 202.749343ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:08.139427Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.812961ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790207116312244 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/priorityclasses/system-node-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-node-critical\" value_size:375 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T08:32:08.139554Z","caller":"traceutil/trace.go:172","msg":"trace[108601188] transaction","detail":"{read_only:false; response_revision:44; number_of_response:1; }","duration":"231.52373ms","start":"2025-11-23T08:32:07.908020Z","end":"2025-11-23T08:32:08.139544Z","steps":["trace[108601188] 'process raft request'  (duration: 231.457893ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:32:08.139639Z","caller":"traceutil/trace.go:172","msg":"trace[321238394] linearizableReadLoop","detail":"{readStateIndex:47; appliedIndex:46; }","duration":"138.949542ms","start":"2025-11-23T08:32:08.000662Z","end":"2025-11-23T08:32:08.139611Z","steps":["trace[321238394] 'read index received'  (duration: 39.646615ms)","trace[321238394] 'applied index is now lower than readState.Index'  (duration: 99.301683ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:32:08.139735Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.069951ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:32:08.139741Z","caller":"traceutil/trace.go:172","msg":"trace[1262351696] transaction","detail":"{read_only:false; response_revision:43; number_of_response:1; }","duration":"235.918521ms","start":"2025-11-23T08:32:07.903811Z","end":"2025-11-23T08:32:08.139729Z","steps":["trace[1262351696] 'process raft request'  (duration: 32.750886ms)","trace[1262351696] 'compare'  (duration: 202.704583ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:32:08.139768Z","caller":"traceutil/trace.go:172","msg":"trace[1127088331] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:44; }","duration":"139.112234ms","start":"2025-11-23T08:32:08.000649Z","end":"2025-11-23T08:32:08.139761Z","steps":["trace[1127088331] 'agreement among raft nodes before linearized reading'  (duration: 139.035395ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:32:45 up  1:15,  0 user,  load average: 4.99, 3.91, 2.54
	Linux no-preload-073500 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ca3eb96c3d252adb3057593371f40b3caf4e1228909a36a8699f2a79d6fb6cce] <==
	I1123 08:32:19.793491       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:32:19.827580       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:32:19.827755       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:32:19.827778       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:32:19.827812       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:32:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:32:20.087956       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:32:20.087998       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:32:20.088010       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:32:20.088752       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:32:20.688105       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:32:20.688135       1 metrics.go:72] Registering metrics
	I1123 08:32:20.688238       1 controller.go:711] "Syncing nftables rules"
	I1123 08:32:30.032934       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:32:30.032973       1 main.go:301] handling current node
	I1123 08:32:40.033840       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:32:40.033903       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1694def06ea2d8a155cb47af99c0d04cc3c44caf1a6329d23288f186b50ea989] <==
	I1123 08:32:06.939678       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 08:32:06.998672       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1123 08:32:07.000270       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 08:32:07.001030       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:32:07.340889       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:07.341480       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:32:07.344956       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:32:08.142162       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:32:08.153670       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:32:08.153694       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:32:08.906637       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:32:08.978387       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:32:09.111440       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:32:09.126250       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 08:32:09.127721       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:32:09.132650       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:32:09.866578       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:32:09.929957       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:32:09.944340       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:32:09.954747       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:32:15.275873       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:15.286353       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:15.620739       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:32:15.972606       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:32:41.952875       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:49666: use of closed network connection
	
	
	==> kube-controller-manager [5f32b57cfa8650937c60154398a896b081e003e9946e18a8cd77530fe24a60b9] <==
	I1123 08:32:14.866104       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:32:14.866191       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:32:14.866199       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:32:14.866471       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:32:14.867007       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:32:14.867045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:32:14.867042       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:32:14.867117       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:32:14.867133       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:32:14.867152       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:32:14.868356       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:32:14.871687       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:32:14.871790       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:32:14.871814       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:32:14.871864       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:32:14.871890       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:32:14.871941       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:32:14.871984       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:32:14.871995       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:32:14.872367       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:32:14.878808       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-073500" podCIDRs=["10.244.0.0/24"]
	I1123 08:32:14.881730       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:32:14.884838       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:32:14.894113       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:32:34.821841       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b77ff0908bf0c0a81956c8305a20c3edf1b1187baf6a6728315a8b4103b53bae] <==
	I1123 08:32:16.640985       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:32:16.723559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:32:16.824314       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:32:16.824421       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 08:32:16.824564       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:32:16.848328       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:32:16.848388       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:32:16.853720       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:32:16.854203       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:32:16.854259       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:32:16.855970       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:32:16.856008       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:32:16.855982       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:32:16.856040       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:32:16.855977       1 config.go:200] "Starting service config controller"
	I1123 08:32:16.856051       1 config.go:309] "Starting node config controller"
	I1123 08:32:16.856053       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:32:16.856064       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:32:16.856072       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:32:16.956484       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:32:16.956560       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:32:16.956528       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e1519f5508be284cdd41d5d4c9cfd97cf0032a8eed4b95badacc618119647cc2] <==
	E1123 08:32:06.903417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:32:06.903486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:32:06.903608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:32:06.903686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:32:06.903838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:32:07.740422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:32:07.826496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:32:07.858290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:32:07.898684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:32:07.904813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:32:07.906864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:32:07.911338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:32:07.971666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:32:08.000537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:32:08.069098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:32:08.086496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:32:08.121297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:32:08.198700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:32:08.219278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:32:08.279135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:32:08.315740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:32:08.357627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:32:08.424586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:32:08.452104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1123 08:32:10.491291       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:32:10 no-preload-073500 kubelet[2208]: E1123 08:32:10.792735    2208 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-073500\" already exists" pod="kube-system/kube-scheduler-no-preload-073500"
	Nov 23 08:32:10 no-preload-073500 kubelet[2208]: I1123 08:32:10.809684    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-073500" podStartSLOduration=2.8096579090000002 podStartE2EDuration="2.809657909s" podCreationTimestamp="2025-11-23 08:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.809612174 +0000 UTC m=+1.142649339" watchObservedRunningTime="2025-11-23 08:32:10.809657909 +0000 UTC m=+1.142695049"
	Nov 23 08:32:10 no-preload-073500 kubelet[2208]: I1123 08:32:10.837075    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-073500" podStartSLOduration=1.837054575 podStartE2EDuration="1.837054575s" podCreationTimestamp="2025-11-23 08:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.836942669 +0000 UTC m=+1.169979819" watchObservedRunningTime="2025-11-23 08:32:10.837054575 +0000 UTC m=+1.170091710"
	Nov 23 08:32:10 no-preload-073500 kubelet[2208]: I1123 08:32:10.837195    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-073500" podStartSLOduration=2.837186182 podStartE2EDuration="2.837186182s" podCreationTimestamp="2025-11-23 08:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.824152295 +0000 UTC m=+1.157189435" watchObservedRunningTime="2025-11-23 08:32:10.837186182 +0000 UTC m=+1.170223322"
	Nov 23 08:32:10 no-preload-073500 kubelet[2208]: I1123 08:32:10.849085    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-073500" podStartSLOduration=1.849046651 podStartE2EDuration="1.849046651s" podCreationTimestamp="2025-11-23 08:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:10.848589542 +0000 UTC m=+1.181626704" watchObservedRunningTime="2025-11-23 08:32:10.849046651 +0000 UTC m=+1.182083791"
	Nov 23 08:32:14 no-preload-073500 kubelet[2208]: I1123 08:32:14.950016    2208 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:32:14 no-preload-073500 kubelet[2208]: I1123 08:32:14.950823    2208 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100173    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3db1b11c-8ba0-4951-a834-1a88c8abade4-xtables-lock\") pod \"kube-proxy-v2r6z\" (UID: \"3db1b11c-8ba0-4951-a834-1a88c8abade4\") " pod="kube-system/kube-proxy-v2r6z"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100706    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3db1b11c-8ba0-4951-a834-1a88c8abade4-lib-modules\") pod \"kube-proxy-v2r6z\" (UID: \"3db1b11c-8ba0-4951-a834-1a88c8abade4\") " pod="kube-system/kube-proxy-v2r6z"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100810    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7w7t\" (UniqueName: \"kubernetes.io/projected/3db1b11c-8ba0-4951-a834-1a88c8abade4-kube-api-access-g7w7t\") pod \"kube-proxy-v2r6z\" (UID: \"3db1b11c-8ba0-4951-a834-1a88c8abade4\") " pod="kube-system/kube-proxy-v2r6z"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100848    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30acd1c0-02f9-4309-b601-2173a8bf74e4-xtables-lock\") pod \"kindnet-5mdzd\" (UID: \"30acd1c0-02f9-4309-b601-2173a8bf74e4\") " pod="kube-system/kindnet-5mdzd"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100869    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30acd1c0-02f9-4309-b601-2173a8bf74e4-lib-modules\") pod \"kindnet-5mdzd\" (UID: \"30acd1c0-02f9-4309-b601-2173a8bf74e4\") " pod="kube-system/kindnet-5mdzd"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100899    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3db1b11c-8ba0-4951-a834-1a88c8abade4-kube-proxy\") pod \"kube-proxy-v2r6z\" (UID: \"3db1b11c-8ba0-4951-a834-1a88c8abade4\") " pod="kube-system/kube-proxy-v2r6z"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100920    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/30acd1c0-02f9-4309-b601-2173a8bf74e4-cni-cfg\") pod \"kindnet-5mdzd\" (UID: \"30acd1c0-02f9-4309-b601-2173a8bf74e4\") " pod="kube-system/kindnet-5mdzd"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.100943    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgl6h\" (UniqueName: \"kubernetes.io/projected/30acd1c0-02f9-4309-b601-2173a8bf74e4-kube-api-access-sgl6h\") pod \"kindnet-5mdzd\" (UID: \"30acd1c0-02f9-4309-b601-2173a8bf74e4\") " pod="kube-system/kindnet-5mdzd"
	Nov 23 08:32:16 no-preload-073500 kubelet[2208]: I1123 08:32:16.816000    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v2r6z" podStartSLOduration=1.81597631 podStartE2EDuration="1.81597631s" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:16.815895813 +0000 UTC m=+7.148932953" watchObservedRunningTime="2025-11-23 08:32:16.81597631 +0000 UTC m=+7.149013450"
	Nov 23 08:32:21 no-preload-073500 kubelet[2208]: I1123 08:32:21.802383    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5mdzd" podStartSLOduration=4.016599938 podStartE2EDuration="6.802359972s" podCreationTimestamp="2025-11-23 08:32:15 +0000 UTC" firstStartedPulling="2025-11-23 08:32:16.705364226 +0000 UTC m=+7.038401360" lastFinishedPulling="2025-11-23 08:32:19.491124253 +0000 UTC m=+9.824161394" observedRunningTime="2025-11-23 08:32:19.826005402 +0000 UTC m=+10.159042554" watchObservedRunningTime="2025-11-23 08:32:21.802359972 +0000 UTC m=+12.135397113"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.115990    2208 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.207588    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cf8f0fc5-565a-4403-b360-e6bdc35785cf-tmp\") pod \"storage-provisioner\" (UID: \"cf8f0fc5-565a-4403-b360-e6bdc35785cf\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.207633    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc0286b1-520a-4d92-998a-4c366264e9a1-config-volume\") pod \"coredns-66bc5c9577-tckhn\" (UID: \"cc0286b1-520a-4d92-998a-4c366264e9a1\") " pod="kube-system/coredns-66bc5c9577-tckhn"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.207656    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk4mt\" (UniqueName: \"kubernetes.io/projected/cf8f0fc5-565a-4403-b360-e6bdc35785cf-kube-api-access-vk4mt\") pod \"storage-provisioner\" (UID: \"cf8f0fc5-565a-4403-b360-e6bdc35785cf\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.207674    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzsnc\" (UniqueName: \"kubernetes.io/projected/cc0286b1-520a-4d92-998a-4c366264e9a1-kube-api-access-xzsnc\") pod \"coredns-66bc5c9577-tckhn\" (UID: \"cc0286b1-520a-4d92-998a-4c366264e9a1\") " pod="kube-system/coredns-66bc5c9577-tckhn"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.852447    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tckhn" podStartSLOduration=14.852428191 podStartE2EDuration="14.852428191s" podCreationTimestamp="2025-11-23 08:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:30.852244113 +0000 UTC m=+21.185281250" watchObservedRunningTime="2025-11-23 08:32:30.852428191 +0000 UTC m=+21.185465333"
	Nov 23 08:32:30 no-preload-073500 kubelet[2208]: I1123 08:32:30.873881    2208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.87386041 podStartE2EDuration="14.87386041s" podCreationTimestamp="2025-11-23 08:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:30.873464003 +0000 UTC m=+21.206501155" watchObservedRunningTime="2025-11-23 08:32:30.87386041 +0000 UTC m=+21.206897551"
	Nov 23 08:32:32 no-preload-073500 kubelet[2208]: I1123 08:32:32.925736    2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fsh\" (UniqueName: \"kubernetes.io/projected/6da94757-8ca2-4dd7-a188-8675c49bc42b-kube-api-access-l9fsh\") pod \"busybox\" (UID: \"6da94757-8ca2-4dd7-a188-8675c49bc42b\") " pod="default/busybox"
	
	
	==> storage-provisioner [54a1be0953581fccca25c8f76cbfc64195b931deede6c54427fe729f8d7f30b8] <==
	I1123 08:32:30.652804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:32:30.663901       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:32:30.663984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:32:30.666578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:30.671753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:30.671976       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:32:30.672089       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"14d3c6e4-f334-41a4-9fb6-618849a813fc", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-073500_af9e803d-08e2-4d92-968c-b638ec812417 became leader
	I1123 08:32:30.672192       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-073500_af9e803d-08e2-4d92-968c-b638ec812417!
	W1123 08:32:30.677189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:30.681594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:30.773607       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-073500_af9e803d-08e2-4d92-968c-b638ec812417!
	W1123 08:32:32.685644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:32.693680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:34.697623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:34.703656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:36.706732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:36.711930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:38.716137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:38.721144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:40.724331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:40.728897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:42.732990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:42.739540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:44.743731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:44.748052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-073500 -n no-preload-073500
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-073500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (13.46s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-589368 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f383f151-7f4a-4182-acac-584b4e100ec0] Pending
helpers_test.go:352: "busybox" [f383f151-7f4a-4182-acac-584b4e100ec0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f383f151-7f4a-4182-acac-584b4e100ec0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004566188s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-589368 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
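A minimal sketch for reproducing the failed check by hand, assuming the busybox pod from testdata/busybox.yaml is still running; the systemctl query is an added diagnostic using standard systemd properties, not part of the test itself:

	# Re-run the assertion: soft and hard open-file limits inside the container
	kubectl --context default-k8s-diff-port-589368 exec busybox -- /bin/sh -c "ulimit -n; ulimit -Hn"
	# Compare with the limits the containerd service itself was started with
	minikube ssh -p default-k8s-diff-port-589368 -- systemctl show containerd --property=LimitNOFILE,LimitNOFILESoft

Since a Kubernetes pod spec has no field for ulimits, a soft limit of 1024 inside the container would point at the runtime defaults (containerd/runc, or the LimitNOFILE of the containerd unit) rather than at the workload itself.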
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-589368
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-589368:

-- stdout --
	[
	    {
	        "Id": "7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd",
	        "Created": "2025-11-23T08:32:08.28959015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:32:08.338758514Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd/hosts",
	        "LogPath": "/var/lib/docker/containers/7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd/7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd-json.log",
	        "Name": "/default-k8s-diff-port-589368",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-589368:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-589368",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd",
	                "LowerDir": "/var/lib/docker/overlay2/b82b1b3b45ed1cf02b1edb2ec06e7a053c7ffb220085780dbd268e82558ca976-init/diff:/var/lib/docker/overlay2/f8ae64c4d7d1e12e69b7d69a01d34a96c2f353aeac48a9b438b028f010c32149/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b82b1b3b45ed1cf02b1edb2ec06e7a053c7ffb220085780dbd268e82558ca976/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b82b1b3b45ed1cf02b1edb2ec06e7a053c7ffb220085780dbd268e82558ca976/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b82b1b3b45ed1cf02b1edb2ec06e7a053c7ffb220085780dbd268e82558ca976/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-589368",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-589368/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-589368",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-589368",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-589368",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d32cd6a3fbe376f6b89280b01aae7dae826ab2160b61401aa4f3e15f85163475",
	            "SandboxKey": "/var/run/docker/netns/d32cd6a3fbe3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-589368": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "569c49b64783e7db5048373c78e482a2033020bba3d0ef79a674f75891c7df96",
	                    "EndpointID": "85a80aa4e222e853e9d89deeb5f807e1e52aa1d65fefcb0995901499a143d252",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "da:13:2a:65:fb:31",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-589368",
	                        "7026f23687d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
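
Note on the HostConfig section of the inspect output above: "Ulimits" is an empty array, i.e. no per-container resource-limit overrides (such as nofile) were applied, so the container falls back to the Docker daemon's defaults. As a quick sketch for pulling just that field out of the inspect output (container name taken from this report; --format with a Go template is standard docker inspect usage):

  docker inspect --format '{{json .HostConfig.Ulimits}}' default-k8s-diff-port-589368
  # prints [] for this container, i.e. no explicit ulimit overrides
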
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-589368 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-589368 logs -n 25: (1.091364328s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-366757 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo containerd config dump                                                                                                                                                                                                        │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo crio config                                                                                                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ delete  │ -p bridge-366757                                                                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-900754                                                                                                                                                                                                                     │ disable-driver-mounts-900754 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-589368 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-644335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ stop    │ -p old-k8s-version-644335 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-644335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p old-k8s-version-644335 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-329854 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-329854           │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ stop    │ -p embed-certs-329854 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-329854           │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-073500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-073500            │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ stop    │ -p no-preload-073500 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-073500            │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-329854 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-329854           │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p embed-certs-329854 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-329854           │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:32:56
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:32:56.586822  340873 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:32:56.587167  340873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:56.587174  340873 out.go:374] Setting ErrFile to fd 2...
	I1123 08:32:56.587180  340873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:56.587481  340873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:32:56.588083  340873 out.go:368] Setting JSON to false
	I1123 08:32:56.589406  340873 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4515,"bootTime":1763882262,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:32:56.589517  340873 start.go:143] virtualization: kvm guest
	I1123 08:32:56.591492  340873 out.go:179] * [embed-certs-329854] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:32:56.593261  340873 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:32:56.593244  340873 notify.go:221] Checking for updates...
	I1123 08:32:56.594444  340873 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:32:56.595707  340873 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:56.596921  340873 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 08:32:56.598343  340873 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:32:56.599978  340873 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
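	
	A note on the format declared in the header above: each entry is a klog line, [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg, where the leading letter encodes severity (Info/Warning/Error/Fatal). A minimal sketch for keeping only warning- and error-severity lines from a saved copy of this log (the filename last_start.log is hypothetical):
	
	  grep -E '^[WE][0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}\.' last_start.log
	
	On the excerpt above this matches nothing, since every line shown is severity I.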
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	5ec957a59827f       56cc512116c8f       8 seconds ago       Running             busybox                   0                   333e43aa76b87       busybox                                                default
	d1c2813ca8219       52546a367cc9e       13 seconds ago      Running             coredns                   0                   22274ef49cb25       coredns-66bc5c9577-6xpjl                               kube-system
	be2a8d916be6a       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   266ec11be519c       storage-provisioner                                    kube-system
	a5dc2a2c436e1       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   0698256e676e6       kindnet-5jwgt                                          kube-system
	4a23395ddf486       fc25172553d79       25 seconds ago      Running             kube-proxy                0                   bd0744886815a       kube-proxy-gtbbd                                       kube-system
	3ecc6cbf82c2a       c80c8dbafe7dd       35 seconds ago      Running             kube-controller-manager   0                   ca1ed7a60071f       kube-controller-manager-default-k8s-diff-port-589368   kube-system
	dc60785939b1a       c3994bc696102       35 seconds ago      Running             kube-apiserver            0                   1745e5b26e36c       kube-apiserver-default-k8s-diff-port-589368            kube-system
	e493d61ab271c       5f1f5298c888d       35 seconds ago      Running             etcd                      0                   e1c6b4821e8c4       etcd-default-k8s-diff-port-589368                      kube-system
	cc58955607624       7dd6aaa1717ab       36 seconds ago      Running             kube-scheduler            0                   a05a3bd236cb4       kube-scheduler-default-k8s-diff-port-589368            kube-system
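	
	A table like this can be re-derived on the node itself while the profile is still running; a sketch, assuming crictl inside the node is pointed at the containerd socket (typically the case when minikube runs with the containerd runtime):
	
	  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-589368 "sudo crictl ps -a"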
	
	
	==> containerd <==
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.197316859Z" level=info msg="StartContainer for \"be2a8d916be6abca8d80797137b844a6c5a7660c10516a7b8fa04cca64214e28\""
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.198417736Z" level=info msg="connecting to shim be2a8d916be6abca8d80797137b844a6c5a7660c10516a7b8fa04cca64214e28" address="unix:///run/containerd/s/fae407416a19ab1e062250e5dc10470fc1c995379aae77a40c595dc292637852" protocol=ttrpc version=3
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.205223714Z" level=info msg="CreateContainer within sandbox \"22274ef49cb25ed8d724c0c84dbdd19576fdcf1e73aa200761a84ccf31a4e506\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.212716251Z" level=info msg="Container d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.219589132Z" level=info msg="CreateContainer within sandbox \"22274ef49cb25ed8d724c0c84dbdd19576fdcf1e73aa200761a84ccf31a4e506\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b\""
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.220151988Z" level=info msg="StartContainer for \"d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b\""
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.221291339Z" level=info msg="connecting to shim d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b" address="unix:///run/containerd/s/2fcb61eee781f57545d8e23626d973c97d5e7c1c62fe7e6d865cb4007cdac44d" protocol=ttrpc version=3
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.282484784Z" level=info msg="StartContainer for \"be2a8d916be6abca8d80797137b844a6c5a7660c10516a7b8fa04cca64214e28\" returns successfully"
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.297672596Z" level=info msg="StartContainer for \"d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b\" returns successfully"
	Nov 23 08:32:46 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:46.349101924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f383f151-7f4a-4182-acac-584b4e100ec0,Namespace:default,Attempt:0,}"
	Nov 23 08:32:46 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:46.389463783Z" level=info msg="connecting to shim 333e43aa76b879b767587517de8a817691152ff13001c3b8ba05c6281632b2e6" address="unix:///run/containerd/s/8629df459abdaa7ab30d8d9e344079a8759f3d27182860b3296f16df90e974ff" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:32:46 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:46.522038514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f383f151-7f4a-4182-acac-584b4e100ec0,Namespace:default,Attempt:0,} returns sandbox id \"333e43aa76b879b767587517de8a817691152ff13001c3b8ba05c6281632b2e6\""
	Nov 23 08:32:46 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:46.526698888Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.797094342Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.798496788Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.800179245Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.802496812Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.803858276Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.277090751s"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.803918680Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.813233313Z" level=info msg="CreateContainer within sandbox \"333e43aa76b879b767587517de8a817691152ff13001c3b8ba05c6281632b2e6\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.821430687Z" level=info msg="Container 5ec957a59827f402808edee55b06bd982e451b44cd873da3b7bdda936015cd68: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.835926315Z" level=info msg="CreateContainer within sandbox \"333e43aa76b879b767587517de8a817691152ff13001c3b8ba05c6281632b2e6\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"5ec957a59827f402808edee55b06bd982e451b44cd873da3b7bdda936015cd68\""
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.837479508Z" level=info msg="StartContainer for \"5ec957a59827f402808edee55b06bd982e451b44cd873da3b7bdda936015cd68\""
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.839881557Z" level=info msg="connecting to shim 5ec957a59827f402808edee55b06bd982e451b44cd873da3b7bdda936015cd68" address="unix:///run/containerd/s/8629df459abdaa7ab30d8d9e344079a8759f3d27182860b3296f16df90e974ff" protocol=ttrpc version=3
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.921322009Z" level=info msg="StartContainer for \"5ec957a59827f402808edee55b06bd982e451b44cd873da3b7bdda936015cd68\" returns successfully"
	
	
	==> coredns [d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47837 - 65458 "HINFO IN 1778367819640139400.7981754308843983188. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074622053s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-589368
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-589368
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=default-k8s-diff-port-589368
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_32_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:32:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-589368
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:32:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:32:56 +0000   Sun, 23 Nov 2025 08:32:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:32:56 +0000   Sun, 23 Nov 2025 08:32:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:32:56 +0000   Sun, 23 Nov 2025 08:32:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:32:56 +0000   Sun, 23 Nov 2025 08:32:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-589368
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ac8512a9-bb00-4f66-aa97-e9e94775edd0
	  Boot ID:                    5380b858-5e3f-4ab2-b78d-8704cd56a682
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-6xpjl                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-589368                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-5jwgt                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-589368             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-589368    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-gtbbd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-589368             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node default-k8s-diff-port-589368 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node default-k8s-diff-port-589368 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node default-k8s-diff-port-589368 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node default-k8s-diff-port-589368 event: Registered Node default-k8s-diff-port-589368 in Controller
	  Normal  NodeReady                15s   kubelet          Node default-k8s-diff-port-589368 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 7d 09 6f 5f 2b 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 d4 5e e6 42 49 08 06
	[ +11.373766] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 a4 f8 6b 15 37 08 06
	[  +0.013916] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[ +40.470104] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	[  +0.167388] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 d3 04 3f 4c f4 08 06
	[  +2.400864] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 01 20 fe a4 35 08 06
	[  +0.000386] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[  +5.210763] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[Nov23 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a c0 03 9d 77 98 08 06
	[  +0.000409] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[ +19.602508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 9b 99 36 e6 f4 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	
	
	==> etcd [e493d61ab271cc67905737050421c6c6bf56e68a838e8ed6cf3dde6327c413d6] <==
	{"level":"warn","ts":"2025-11-23T08:32:22.436131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.444406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.454623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.465381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.478339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.486662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.494549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.501998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.510295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.525994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.542700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.549270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.557414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.564590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.572452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.579270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.590064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.598001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.604540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.611925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.618453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.640932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.648349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.656642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.722935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44078","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:32:57 up  1:15,  0 user,  load average: 4.45, 3.83, 2.52
	Linux default-k8s-diff-port-589368 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a5dc2a2c436e1f9f3d8eb27bc3137661c117bb7b8da2c7216f28dffee5d5dcab] <==
	I1123 08:32:32.457910       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:32:32.458242       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:32:32.458433       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:32:32.458453       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:32:32.458478       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:32:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:32:32.737367       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:32:32.839427       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:32:32.841766       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:32:32.937297       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:32:33.338009       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:32:33.338048       1 metrics.go:72] Registering metrics
	I1123 08:32:33.338153       1 controller.go:711] "Syncing nftables rules"
	I1123 08:32:42.663632       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:32:42.663707       1 main.go:301] handling current node
	I1123 08:32:52.663939       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:32:52.663981       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dc60785939b1a362e3e5089331eedbf39ad142d35c86869fdc7d3fce1d78e23b] <==
	E1123 08:32:23.362547       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 08:32:23.383080       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:32:23.389455       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:23.389540       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:32:23.406201       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:23.407166       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:32:23.566534       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:32:24.186742       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:32:24.190641       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:32:24.190662       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:32:24.743209       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:32:24.784636       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:32:24.954395       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:32:24.961339       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:32:24.962436       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:32:24.967026       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:32:25.240360       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:32:25.925589       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:32:25.936111       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:32:25.945499       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:32:30.242689       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:32:31.094194       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:31.098131       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:31.342011       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:32:56.146237       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:46684: use of closed network connection
	
	
	==> kube-controller-manager [3ecc6cbf82c2a8ef562bd274ff3a66b868aed57f1b0cefdbfce0aab7741009ea] <==
	I1123 08:32:30.239732       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:32:30.239777       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:32:30.240144       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:32:30.240176       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:32:30.240259       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:32:30.240382       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:32:30.241284       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:32:30.241298       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:32:30.241292       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:32:30.241623       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:32:30.241712       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:32:30.242103       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:32:30.243896       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:32:30.244438       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:32:30.244648       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:32:30.244775       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:32:30.244829       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:32:30.244870       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:32:30.248036       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:32:30.251233       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:32:30.254633       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-589368" podCIDRs=["10.244.0.0/24"]
	I1123 08:32:30.269305       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:32:30.276611       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:32:30.281876       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:32:45.191010       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4a23395ddf486046bdb227bec271f83ccfbf543d57d1bcc7ff5336ebe1b583e2] <==
	I1123 08:32:31.956239       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:32:32.022177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:32:32.122584       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:32:32.122633       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:32:32.122808       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:32:32.144909       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:32:32.144982       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:32:32.150499       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:32:32.150879       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:32:32.150914       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:32:32.152874       1 config.go:200] "Starting service config controller"
	I1123 08:32:32.152895       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:32:32.152897       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:32:32.152913       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:32:32.152798       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:32:32.152953       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:32:32.153390       1 config.go:309] "Starting node config controller"
	I1123 08:32:32.153552       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:32:32.153565       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:32:32.253464       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:32:32.253476       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:32:32.253589       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cc58955607624d4ac28708b41ab0dbf803e89847c669c16f301b0530b31c4350] <==
	E1123 08:32:23.252237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:32:23.252280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:32:23.252314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:32:23.252333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:32:23.252351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:32:23.252385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:32:23.252421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:32:23.252455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:32:23.252468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:32:23.252496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:32:23.252557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:32:23.252555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:32:24.057261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:32:24.081964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:32:24.140368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:32:24.144765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:32:24.235784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:32:24.302802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:32:24.334157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:32:24.379610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:32:24.420776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:32:24.421728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:32:24.476450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:32:24.518773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1123 08:32:26.446053       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
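
Note: the "Failed to watch ... is forbidden" reflector errors above are the usual transient RBAC race during control-plane bring-up: kube-scheduler starts its informers before the apiserver is serving the system:kube-scheduler role bindings, and the errors stop once the caches sync (the final line). Assuming the profile is still running, the converged permissions can be spot-checked with:

  kubectl --context default-k8s-diff-port-589368 auth can-i list pods --as=system:kube-scheduler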
	
	
	==> kubelet <==
	Nov 23 08:32:26 default-k8s-diff-port-589368 kubelet[1452]: E1123 08:32:26.813676    1452 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-589368\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-589368"
	Nov 23 08:32:26 default-k8s-diff-port-589368 kubelet[1452]: E1123 08:32:26.814002    1452 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-589368\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-589368"
	Nov 23 08:32:26 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:26.827953    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-589368" podStartSLOduration=1.827935192 podStartE2EDuration="1.827935192s" podCreationTimestamp="2025-11-23 08:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:26.827855522 +0000 UTC m=+1.151697587" watchObservedRunningTime="2025-11-23 08:32:26.827935192 +0000 UTC m=+1.151777253"
	Nov 23 08:32:26 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:26.840935    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-589368" podStartSLOduration=1.8409122660000001 podStartE2EDuration="1.840912266s" podCreationTimestamp="2025-11-23 08:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:26.840483934 +0000 UTC m=+1.164326019" watchObservedRunningTime="2025-11-23 08:32:26.840912266 +0000 UTC m=+1.164754349"
	Nov 23 08:32:26 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:26.863873    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-589368" podStartSLOduration=1.863844729 podStartE2EDuration="1.863844729s" podCreationTimestamp="2025-11-23 08:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:26.850866175 +0000 UTC m=+1.174708257" watchObservedRunningTime="2025-11-23 08:32:26.863844729 +0000 UTC m=+1.187686819"
	Nov 23 08:32:30 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:30.329809    1452 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:32:30 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:30.330577    1452 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.393947    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a07f9b30-2e69-49d0-b6ef-e7b596668c6c-lib-modules\") pod \"kindnet-5jwgt\" (UID: \"a07f9b30-2e69-49d0-b6ef-e7b596668c6c\") " pod="kube-system/kindnet-5jwgt"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394010    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7tdp\" (UniqueName: \"kubernetes.io/projected/a07f9b30-2e69-49d0-b6ef-e7b596668c6c-kube-api-access-d7tdp\") pod \"kindnet-5jwgt\" (UID: \"a07f9b30-2e69-49d0-b6ef-e7b596668c6c\") " pod="kube-system/kindnet-5jwgt"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394054    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1794c601-0080-4201-8fa7-4b7042de3f70-xtables-lock\") pod \"kube-proxy-gtbbd\" (UID: \"1794c601-0080-4201-8fa7-4b7042de3f70\") " pod="kube-system/kube-proxy-gtbbd"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394078    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w92dp\" (UniqueName: \"kubernetes.io/projected/1794c601-0080-4201-8fa7-4b7042de3f70-kube-api-access-w92dp\") pod \"kube-proxy-gtbbd\" (UID: \"1794c601-0080-4201-8fa7-4b7042de3f70\") " pod="kube-system/kube-proxy-gtbbd"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394114    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a07f9b30-2e69-49d0-b6ef-e7b596668c6c-xtables-lock\") pod \"kindnet-5jwgt\" (UID: \"a07f9b30-2e69-49d0-b6ef-e7b596668c6c\") " pod="kube-system/kindnet-5jwgt"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394166    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1794c601-0080-4201-8fa7-4b7042de3f70-kube-proxy\") pod \"kube-proxy-gtbbd\" (UID: \"1794c601-0080-4201-8fa7-4b7042de3f70\") " pod="kube-system/kube-proxy-gtbbd"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394198    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1794c601-0080-4201-8fa7-4b7042de3f70-lib-modules\") pod \"kube-proxy-gtbbd\" (UID: \"1794c601-0080-4201-8fa7-4b7042de3f70\") " pod="kube-system/kube-proxy-gtbbd"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394224    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a07f9b30-2e69-49d0-b6ef-e7b596668c6c-cni-cfg\") pod \"kindnet-5jwgt\" (UID: \"a07f9b30-2e69-49d0-b6ef-e7b596668c6c\") " pod="kube-system/kindnet-5jwgt"
	Nov 23 08:32:32 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:32.844829    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gtbbd" podStartSLOduration=1.844804271 podStartE2EDuration="1.844804271s" podCreationTimestamp="2025-11-23 08:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:32.827494465 +0000 UTC m=+7.151336542" watchObservedRunningTime="2025-11-23 08:32:32.844804271 +0000 UTC m=+7.168646354"
	Nov 23 08:32:32 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:32.844989    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5jwgt" podStartSLOduration=1.844976918 podStartE2EDuration="1.844976918s" podCreationTimestamp="2025-11-23 08:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:32.844434067 +0000 UTC m=+7.168276148" watchObservedRunningTime="2025-11-23 08:32:32.844976918 +0000 UTC m=+7.168818997"
	Nov 23 08:32:42 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:42.715938    1452 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:32:42 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:42.781372    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfjlp\" (UniqueName: \"kubernetes.io/projected/ba94dfd6-117a-458e-a2bc-56d07e4ece76-kube-api-access-vfjlp\") pod \"storage-provisioner\" (UID: \"ba94dfd6-117a-458e-a2bc-56d07e4ece76\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:42 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:42.781435    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ba94dfd6-117a-458e-a2bc-56d07e4ece76-tmp\") pod \"storage-provisioner\" (UID: \"ba94dfd6-117a-458e-a2bc-56d07e4ece76\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:42 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:42.882773    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad0a1f7b-df3d-473f-8add-c1280351efcf-config-volume\") pod \"coredns-66bc5c9577-6xpjl\" (UID: \"ad0a1f7b-df3d-473f-8add-c1280351efcf\") " pod="kube-system/coredns-66bc5c9577-6xpjl"
	Nov 23 08:32:42 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:42.882855    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5m5g\" (UniqueName: \"kubernetes.io/projected/ad0a1f7b-df3d-473f-8add-c1280351efcf-kube-api-access-h5m5g\") pod \"coredns-66bc5c9577-6xpjl\" (UID: \"ad0a1f7b-df3d-473f-8add-c1280351efcf\") " pod="kube-system/coredns-66bc5c9577-6xpjl"
	Nov 23 08:32:43 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:43.860492    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6xpjl" podStartSLOduration=12.860469746 podStartE2EDuration="12.860469746s" podCreationTimestamp="2025-11-23 08:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:43.860438406 +0000 UTC m=+18.184280488" watchObservedRunningTime="2025-11-23 08:32:43.860469746 +0000 UTC m=+18.184311829"
	Nov 23 08:32:43 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:43.878826    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.878554098 podStartE2EDuration="11.878554098s" podCreationTimestamp="2025-11-23 08:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:43.878407039 +0000 UTC m=+18.202249120" watchObservedRunningTime="2025-11-23 08:32:43.878554098 +0000 UTC m=+18.202396180"
	Nov 23 08:32:46 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:46.109667    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmmzf\" (UniqueName: \"kubernetes.io/projected/f383f151-7f4a-4182-acac-584b4e100ec0-kube-api-access-pmmzf\") pod \"busybox\" (UID: \"f383f151-7f4a-4182-acac-584b4e100ec0\") " pod="default/busybox"
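
Note: the two "Failed creating a mirror pod ... already exists" entries are typically harmless noise; the kubelet re-attempted creating mirror pods for static pods whose API objects had just been created moments earlier. The rest of this log is an ordinary bring-up sequence: pod CIDR assignment at 08:32:30, volume attachment for kube-proxy and kindnet at 08:32:31, node Ready at 08:32:42, then the volume for the busybox test pod at 08:32:46. The resulting pods can be listed with:

  kubectl --context default-k8s-diff-port-589368 get pods -A -o wide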
	
	
	==> storage-provisioner [be2a8d916be6abca8d80797137b844a6c5a7660c10516a7b8fa04cca64214e28] <==
	I1123 08:32:43.290637       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:32:43.304541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:32:43.304603       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:32:43.309030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:43.315648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:43.315838       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:32:43.316125       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-589368_7855aaf2-a1e5-4762-963d-b81eb7235882!
	I1123 08:32:43.316447       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"868856f0-1e0e-4dcc-904e-01ec9763f97a", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-589368_7855aaf2-a1e5-4762-963d-b81eb7235882 became leader
	W1123 08:32:43.319769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:43.324822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:43.416345       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-589368_7855aaf2-a1e5-4762-963d-b81eb7235882!
	W1123 08:32:45.329001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:45.333478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:47.336701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:47.341454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:49.345103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:49.349572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:51.353136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:51.357483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:53.360478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:53.365421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:55.369147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:55.373476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:57.377091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:57.381851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
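
Note: the storage-provisioner still takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the LeaderElection event above), so every renewal round-trips through the client-go deprecation warning, apparently once for the read and once for the write. The repeated warnings are harmless until the provisioner migrates to Lease-based election. The lock object itself can be inspected with:

  kubectl --context default-k8s-diff-port-589368 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml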
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-589368 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
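
Note: the jsonpath query two lines above prints only the names of pods outside the Running phase (empty output means none were stuck); dropping the jsonpath output gives the full status columns:

  kubectl --context default-k8s-diff-port-589368 get po -A --field-selector=status.phase!=Running
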
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-589368
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-589368:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd",
	        "Created": "2025-11-23T08:32:08.28959015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:32:08.338758514Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd/hosts",
	        "LogPath": "/var/lib/docker/containers/7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd/7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd-json.log",
	        "Name": "/default-k8s-diff-port-589368",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-589368:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-589368",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7026f23687d99eff540d35c4c568c12f474daa74225eff3a0d9720e4ba1650bd",
	                "LowerDir": "/var/lib/docker/overlay2/b82b1b3b45ed1cf02b1edb2ec06e7a053c7ffb220085780dbd268e82558ca976-init/diff:/var/lib/docker/overlay2/f8ae64c4d7d1e12e69b7d69a01d34a96c2f353aeac48a9b438b028f010c32149/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b82b1b3b45ed1cf02b1edb2ec06e7a053c7ffb220085780dbd268e82558ca976/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b82b1b3b45ed1cf02b1edb2ec06e7a053c7ffb220085780dbd268e82558ca976/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b82b1b3b45ed1cf02b1edb2ec06e7a053c7ffb220085780dbd268e82558ca976/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-589368",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-589368/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-589368",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-589368",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-589368",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d32cd6a3fbe376f6b89280b01aae7dae826ab2160b61401aa4f3e15f85163475",
	            "SandboxKey": "/var/run/docker/netns/d32cd6a3fbe3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-589368": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "569c49b64783e7db5048373c78e482a2033020bba3d0ef79a674f75891c7df96",
	                    "EndpointID": "85a80aa4e222e853e9d89deeb5f807e1e52aa1d65fefcb0995901499a143d252",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "da:13:2a:65:fb:31",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-589368",
	                        "7026f23687d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
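
Note: two details in the inspect output reward a careful read. HostConfig.PortBindings records an empty HostPort for every published port because minikube lets Docker pick ephemeral host ports; the ports actually assigned appear under NetworkSettings.Ports (e.g. 8444/tcp -> 127.0.0.1:33111, the non-default apiserver port this profile is named for). And HostConfig.Ulimits is an empty list, so the node container inherits whatever nofile defaults the Docker daemon and its systemd unit impose rather than an explicit limit. Assuming the container still exists, both can be queried directly:

  docker inspect -f '{{json .HostConfig.Ulimits}}' default-k8s-diff-port-589368
  docker port default-k8s-diff-port-589368 8444
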
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-589368 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-589368 logs -n 25: (1.046474291s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p bridge-366757 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo containerd config dump                                                                                                                                                                                                        │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-366757 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ ssh     │ -p bridge-366757 sudo crio config                                                                                                                                                                                                                   │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:31 UTC │
	│ delete  │ -p bridge-366757                                                                                                                                                                                                                                    │ bridge-366757                │ jenkins │ v1.37.0 │ 23 Nov 25 08:31 UTC │ 23 Nov 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-900754                                                                                                                                                                                                                     │ disable-driver-mounts-900754 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-589368 │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-644335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ stop    │ -p old-k8s-version-644335 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-644335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p old-k8s-version-644335 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-644335       │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-329854 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-329854           │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ stop    │ -p embed-certs-329854 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-329854           │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-073500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-073500            │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ stop    │ -p no-preload-073500 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-073500            │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-329854 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-329854           │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │ 23 Nov 25 08:32 UTC │
	│ start   │ -p embed-certs-329854 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-329854           │ jenkins │ v1.37.0 │ 23 Nov 25 08:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
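
Note: the audit trail shows this profile was created fresh in the failure window (bridge-366757 and disable-driver-mounts-900754 torn down, then default-k8s-diff-port-589368 started and finished at 08:32) while the old-k8s-version, embed-certs and no-preload profiles were mid stop/restart for their own subtests (no END TIME yet). The recorded start command can be replayed as-is to reproduce the profile:

  out/minikube-linux-amd64 start -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1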
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:32:56
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
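
Note: the header above documents klog's record layout. Taking the first entry below as a worked example, "I1123 08:32:56.586822  340873 out.go:360]" decodes as severity I (info), date 11/23, wall-clock time with microseconds, the threadid field (in practice the process id, 340873), and the emitting source location out.go:360, followed by the message.
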
	I1123 08:32:56.586822  340873 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:32:56.587167  340873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:56.587174  340873 out.go:374] Setting ErrFile to fd 2...
	I1123 08:32:56.587180  340873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:56.587481  340873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:32:56.588083  340873 out.go:368] Setting JSON to false
	I1123 08:32:56.589406  340873 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4515,"bootTime":1763882262,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:32:56.589517  340873 start.go:143] virtualization: kvm guest
	I1123 08:32:56.591492  340873 out.go:179] * [embed-certs-329854] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:32:56.593261  340873 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:32:56.593244  340873 notify.go:221] Checking for updates...
	I1123 08:32:56.594444  340873 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:32:56.595707  340873 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:32:56.596921  340873 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 08:32:56.598343  340873 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:32:56.599978  340873 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:32:56.602212  340873 config.go:182] Loaded profile config "embed-certs-329854": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:56.602813  340873 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:32:56.630241  340873 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:32:56.630350  340873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:56.703982  340873 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-23 08:32:56.693794269 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:56.704080  340873 docker.go:319] overlay module found
	I1123 08:32:56.706048  340873 out.go:179] * Using the docker driver based on existing profile
	I1123 08:32:56.707342  340873 start.go:309] selected driver: docker
	I1123 08:32:56.707362  340873 start.go:927] validating driver "docker" against &{Name:embed-certs-329854 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-329854 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:56.707491  340873 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:32:56.708395  340873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:56.769047  340873 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-23 08:32:56.758046951 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:56.769362  340873 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:32:56.769394  340873 cni.go:84] Creating CNI manager for ""
	I1123 08:32:56.769448  340873 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:32:56.769488  340873 start.go:353] cluster config:
	{Name:embed-certs-329854 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-329854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:32:56.771358  340873 out.go:179] * Starting "embed-certs-329854" primary control-plane node in "embed-certs-329854" cluster
	I1123 08:32:56.772591  340873 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:32:56.773958  340873 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:32:56.775078  340873 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:32:56.775119  340873 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 08:32:56.775132  340873 cache.go:65] Caching tarball of preloaded images
	I1123 08:32:56.775176  340873 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:32:56.775274  340873 preload.go:238] Found /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:32:56.775301  340873 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:32:56.775456  340873 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/embed-certs-329854/config.json ...
	I1123 08:32:56.796662  340873 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:32:56.796682  340873 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:32:56.796700  340873 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:32:56.796728  340873 start.go:360] acquireMachinesLock for embed-certs-329854: {Name:mkd6cd679138405000c5872d1142433ff5417e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:32:56.796792  340873 start.go:364] duration metric: took 37.652µs to acquireMachinesLock for "embed-certs-329854"
	I1123 08:32:56.796806  340873 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:32:56.796814  340873 fix.go:54] fixHost starting: 
	I1123 08:32:56.797005  340873 cli_runner.go:164] Run: docker container inspect embed-certs-329854 --format={{.State.Status}}
	I1123 08:32:56.814543  340873 fix.go:112] recreateIfNeeded on embed-certs-329854: state=Stopped err=<nil>
	W1123 08:32:56.814603  340873 fix.go:138] unexpected machine state, will restart: <nil>
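
Note: this start log is the profile-reuse fast path: load the saved embed-certs-329854 config, re-validate the docker driver against it, find the v1.34.1 containerd preload tarball and the kicbase image already cached, take the machine lock, and skip creation in favor of fixHost, which finds the existing container Stopped and schedules a restart (the trailing warning). The same state probe minikube ran can be repeated by hand:

  docker container inspect embed-certs-329854 --format={{.State.Status}}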
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	5ec957a59827f       56cc512116c8f       10 seconds ago      Running             busybox                   0                   333e43aa76b87       busybox                                                default
	d1c2813ca8219       52546a367cc9e       15 seconds ago      Running             coredns                   0                   22274ef49cb25       coredns-66bc5c9577-6xpjl                               kube-system
	be2a8d916be6a       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   266ec11be519c       storage-provisioner                                    kube-system
	a5dc2a2c436e1       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   0698256e676e6       kindnet-5jwgt                                          kube-system
	4a23395ddf486       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   bd0744886815a       kube-proxy-gtbbd                                       kube-system
	3ecc6cbf82c2a       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   ca1ed7a60071f       kube-controller-manager-default-k8s-diff-port-589368   kube-system
	dc60785939b1a       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   1745e5b26e36c       kube-apiserver-default-k8s-diff-port-589368            kube-system
	e493d61ab271c       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   e1c6b4821e8c4       etcd-default-k8s-diff-port-589368                      kube-system
	cc58955607624       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   a05a3bd236cb4       kube-scheduler-default-k8s-diff-port-589368            kube-system
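
Note: read bottom-up, the container table is the cluster timeline: the control plane (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) at ~37 seconds before capture, kube-proxy and kindnet once the node registered, coredns and the storage-provisioner when the node went Ready, and finally the busybox test workload 10 seconds before the logs were taken. This is containerd's CRI view; it can be regenerated on the node with something like:

  out/minikube-linux-amd64 -p default-k8s-diff-port-589368 ssh "sudo crictl ps -a"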
	
	
	==> containerd <==
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.197316859Z" level=info msg="StartContainer for \"be2a8d916be6abca8d80797137b844a6c5a7660c10516a7b8fa04cca64214e28\""
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.198417736Z" level=info msg="connecting to shim be2a8d916be6abca8d80797137b844a6c5a7660c10516a7b8fa04cca64214e28" address="unix:///run/containerd/s/fae407416a19ab1e062250e5dc10470fc1c995379aae77a40c595dc292637852" protocol=ttrpc version=3
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.205223714Z" level=info msg="CreateContainer within sandbox \"22274ef49cb25ed8d724c0c84dbdd19576fdcf1e73aa200761a84ccf31a4e506\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.212716251Z" level=info msg="Container d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.219589132Z" level=info msg="CreateContainer within sandbox \"22274ef49cb25ed8d724c0c84dbdd19576fdcf1e73aa200761a84ccf31a4e506\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b\""
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.220151988Z" level=info msg="StartContainer for \"d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b\""
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.221291339Z" level=info msg="connecting to shim d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b" address="unix:///run/containerd/s/2fcb61eee781f57545d8e23626d973c97d5e7c1c62fe7e6d865cb4007cdac44d" protocol=ttrpc version=3
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.282484784Z" level=info msg="StartContainer for \"be2a8d916be6abca8d80797137b844a6c5a7660c10516a7b8fa04cca64214e28\" returns successfully"
	Nov 23 08:32:43 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:43.297672596Z" level=info msg="StartContainer for \"d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b\" returns successfully"
	Nov 23 08:32:46 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:46.349101924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f383f151-7f4a-4182-acac-584b4e100ec0,Namespace:default,Attempt:0,}"
	Nov 23 08:32:46 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:46.389463783Z" level=info msg="connecting to shim 333e43aa76b879b767587517de8a817691152ff13001c3b8ba05c6281632b2e6" address="unix:///run/containerd/s/8629df459abdaa7ab30d8d9e344079a8759f3d27182860b3296f16df90e974ff" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:32:46 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:46.522038514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f383f151-7f4a-4182-acac-584b4e100ec0,Namespace:default,Attempt:0,} returns sandbox id \"333e43aa76b879b767587517de8a817691152ff13001c3b8ba05c6281632b2e6\""
	Nov 23 08:32:46 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:46.526698888Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.797094342Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.798496788Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.800179245Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.802496812Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.803858276Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.277090751s"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.803918680Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.813233313Z" level=info msg="CreateContainer within sandbox \"333e43aa76b879b767587517de8a817691152ff13001c3b8ba05c6281632b2e6\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.821430687Z" level=info msg="Container 5ec957a59827f402808edee55b06bd982e451b44cd873da3b7bdda936015cd68: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.835926315Z" level=info msg="CreateContainer within sandbox \"333e43aa76b879b767587517de8a817691152ff13001c3b8ba05c6281632b2e6\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"5ec957a59827f402808edee55b06bd982e451b44cd873da3b7bdda936015cd68\""
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.837479508Z" level=info msg="StartContainer for \"5ec957a59827f402808edee55b06bd982e451b44cd873da3b7bdda936015cd68\""
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.839881557Z" level=info msg="connecting to shim 5ec957a59827f402808edee55b06bd982e451b44cd873da3b7bdda936015cd68" address="unix:///run/containerd/s/8629df459abdaa7ab30d8d9e344079a8759f3d27182860b3296f16df90e974ff" protocol=ttrpc version=3
	Nov 23 08:32:48 default-k8s-diff-port-589368 containerd[667]: time="2025-11-23T08:32:48.921322009Z" level=info msg="StartContainer for \"5ec957a59827f402808edee55b06bd982e451b44cd873da3b7bdda936015cd68\" returns successfully"
	
	
	==> coredns [d1c2813ca82197c6dee0ef13df63254e58416a461c7e36203f4207a0ffdd5d2b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47837 - 65458 "HINFO IN 1778367819640139400.7981754308843983188. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074622053s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-589368
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-589368
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=default-k8s-diff-port-589368
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_32_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:32:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-589368
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:32:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:32:56 +0000   Sun, 23 Nov 2025 08:32:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:32:56 +0000   Sun, 23 Nov 2025 08:32:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:32:56 +0000   Sun, 23 Nov 2025 08:32:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:32:56 +0000   Sun, 23 Nov 2025 08:32:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-589368
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ac8512a9-bb00-4f66-aa97-e9e94775edd0
	  Boot ID:                    5380b858-5e3f-4ab2-b78d-8704cd56a682
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-6xpjl                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-default-k8s-diff-port-589368                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-5jwgt                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-589368             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-589368    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-gtbbd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-589368             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node default-k8s-diff-port-589368 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node default-k8s-diff-port-589368 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node default-k8s-diff-port-589368 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node default-k8s-diff-port-589368 event: Registered Node default-k8s-diff-port-589368 in Controller
	  Normal  NodeReady                17s   kubelet          Node default-k8s-diff-port-589368 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 7d 09 6f 5f 2b 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 d4 5e e6 42 49 08 06
	[ +11.373766] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 a4 f8 6b 15 37 08 06
	[  +0.013916] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[ +40.470104] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	[  +0.167388] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 d3 04 3f 4c f4 08 06
	[  +2.400864] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 01 20 fe a4 35 08 06
	[  +0.000386] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 7c 96 ae 15 dc 08 06
	[  +5.210763] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[Nov23 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a c0 03 9d 77 98 08 06
	[  +0.000409] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 68 6e 21 c9 1f 08 06
	[ +19.602508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 9b 99 36 e6 f4 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e 69 b6 fd a9 36 08 06
	
	
	==> etcd [e493d61ab271cc67905737050421c6c6bf56e68a838e8ed6cf3dde6327c413d6] <==
	{"level":"warn","ts":"2025-11-23T08:32:22.436131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.444406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.454623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.465381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.478339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.486662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.494549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.501998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.510295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.525994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.542700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.549270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.557414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.564590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.572452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.579270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.590064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.598001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.604540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.611925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.618453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.640932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.648349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.656642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:32:22.722935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44078","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:32:59 up  1:15,  0 user,  load average: 4.09, 3.76, 2.51
	Linux default-k8s-diff-port-589368 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a5dc2a2c436e1f9f3d8eb27bc3137661c117bb7b8da2c7216f28dffee5d5dcab] <==
	I1123 08:32:32.457910       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:32:32.458242       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:32:32.458433       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:32:32.458453       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:32:32.458478       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:32:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:32:32.737367       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:32:32.839427       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:32:32.841766       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:32:32.937297       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:32:33.338009       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:32:33.338048       1 metrics.go:72] Registering metrics
	I1123 08:32:33.338153       1 controller.go:711] "Syncing nftables rules"
	I1123 08:32:42.663632       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:32:42.663707       1 main.go:301] handling current node
	I1123 08:32:52.663939       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:32:52.663981       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dc60785939b1a362e3e5089331eedbf39ad142d35c86869fdc7d3fce1d78e23b] <==
	E1123 08:32:23.362547       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 08:32:23.383080       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:32:23.389455       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:23.389540       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:32:23.406201       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:23.407166       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:32:23.566534       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:32:24.186742       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:32:24.190641       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:32:24.190662       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:32:24.743209       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:32:24.784636       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:32:24.954395       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:32:24.961339       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:32:24.962436       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:32:24.967026       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:32:25.240360       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:32:25.925589       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:32:25.936111       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:32:25.945499       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:32:30.242689       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:32:31.094194       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:31.098131       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:32:31.342011       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:32:56.146237       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:46684: use of closed network connection
	
	
	==> kube-controller-manager [3ecc6cbf82c2a8ef562bd274ff3a66b868aed57f1b0cefdbfce0aab7741009ea] <==
	I1123 08:32:30.239732       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:32:30.239777       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:32:30.240144       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:32:30.240176       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:32:30.240259       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:32:30.240382       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:32:30.241284       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:32:30.241298       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:32:30.241292       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:32:30.241623       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:32:30.241712       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:32:30.242103       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:32:30.243896       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:32:30.244438       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:32:30.244648       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:32:30.244775       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:32:30.244829       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:32:30.244870       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:32:30.248036       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:32:30.251233       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:32:30.254633       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-589368" podCIDRs=["10.244.0.0/24"]
	I1123 08:32:30.269305       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:32:30.276611       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:32:30.281876       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:32:45.191010       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4a23395ddf486046bdb227bec271f83ccfbf543d57d1bcc7ff5336ebe1b583e2] <==
	I1123 08:32:31.956239       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:32:32.022177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:32:32.122584       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:32:32.122633       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:32:32.122808       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:32:32.144909       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:32:32.144982       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:32:32.150499       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:32:32.150879       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:32:32.150914       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:32:32.152874       1 config.go:200] "Starting service config controller"
	I1123 08:32:32.152895       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:32:32.152897       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:32:32.152913       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:32:32.152798       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:32:32.152953       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:32:32.153390       1 config.go:309] "Starting node config controller"
	I1123 08:32:32.153552       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:32:32.153565       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:32:32.253464       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:32:32.253476       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:32:32.253589       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cc58955607624d4ac28708b41ab0dbf803e89847c669c16f301b0530b31c4350] <==
	E1123 08:32:23.252237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:32:23.252280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:32:23.252314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:32:23.252333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:32:23.252351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:32:23.252385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:32:23.252421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:32:23.252455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:32:23.252468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:32:23.252496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:32:23.252557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:32:23.252555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:32:24.057261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:32:24.081964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:32:24.140368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:32:24.144765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:32:24.235784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:32:24.302802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:32:24.334157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:32:24.379610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:32:24.420776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:32:24.421728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:32:24.476450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:32:24.518773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1123 08:32:26.446053       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:32:26 default-k8s-diff-port-589368 kubelet[1452]: E1123 08:32:26.813676    1452 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-589368\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-589368"
	Nov 23 08:32:26 default-k8s-diff-port-589368 kubelet[1452]: E1123 08:32:26.814002    1452 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-589368\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-589368"
	Nov 23 08:32:26 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:26.827953    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-589368" podStartSLOduration=1.827935192 podStartE2EDuration="1.827935192s" podCreationTimestamp="2025-11-23 08:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:26.827855522 +0000 UTC m=+1.151697587" watchObservedRunningTime="2025-11-23 08:32:26.827935192 +0000 UTC m=+1.151777253"
	Nov 23 08:32:26 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:26.840935    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-589368" podStartSLOduration=1.8409122660000001 podStartE2EDuration="1.840912266s" podCreationTimestamp="2025-11-23 08:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:26.840483934 +0000 UTC m=+1.164326019" watchObservedRunningTime="2025-11-23 08:32:26.840912266 +0000 UTC m=+1.164754349"
	Nov 23 08:32:26 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:26.863873    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-589368" podStartSLOduration=1.863844729 podStartE2EDuration="1.863844729s" podCreationTimestamp="2025-11-23 08:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:26.850866175 +0000 UTC m=+1.174708257" watchObservedRunningTime="2025-11-23 08:32:26.863844729 +0000 UTC m=+1.187686819"
	Nov 23 08:32:30 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:30.329809    1452 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:32:30 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:30.330577    1452 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.393947    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a07f9b30-2e69-49d0-b6ef-e7b596668c6c-lib-modules\") pod \"kindnet-5jwgt\" (UID: \"a07f9b30-2e69-49d0-b6ef-e7b596668c6c\") " pod="kube-system/kindnet-5jwgt"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394010    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7tdp\" (UniqueName: \"kubernetes.io/projected/a07f9b30-2e69-49d0-b6ef-e7b596668c6c-kube-api-access-d7tdp\") pod \"kindnet-5jwgt\" (UID: \"a07f9b30-2e69-49d0-b6ef-e7b596668c6c\") " pod="kube-system/kindnet-5jwgt"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394054    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1794c601-0080-4201-8fa7-4b7042de3f70-xtables-lock\") pod \"kube-proxy-gtbbd\" (UID: \"1794c601-0080-4201-8fa7-4b7042de3f70\") " pod="kube-system/kube-proxy-gtbbd"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394078    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w92dp\" (UniqueName: \"kubernetes.io/projected/1794c601-0080-4201-8fa7-4b7042de3f70-kube-api-access-w92dp\") pod \"kube-proxy-gtbbd\" (UID: \"1794c601-0080-4201-8fa7-4b7042de3f70\") " pod="kube-system/kube-proxy-gtbbd"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394114    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a07f9b30-2e69-49d0-b6ef-e7b596668c6c-xtables-lock\") pod \"kindnet-5jwgt\" (UID: \"a07f9b30-2e69-49d0-b6ef-e7b596668c6c\") " pod="kube-system/kindnet-5jwgt"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394166    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1794c601-0080-4201-8fa7-4b7042de3f70-kube-proxy\") pod \"kube-proxy-gtbbd\" (UID: \"1794c601-0080-4201-8fa7-4b7042de3f70\") " pod="kube-system/kube-proxy-gtbbd"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394198    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1794c601-0080-4201-8fa7-4b7042de3f70-lib-modules\") pod \"kube-proxy-gtbbd\" (UID: \"1794c601-0080-4201-8fa7-4b7042de3f70\") " pod="kube-system/kube-proxy-gtbbd"
	Nov 23 08:32:31 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:31.394224    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a07f9b30-2e69-49d0-b6ef-e7b596668c6c-cni-cfg\") pod \"kindnet-5jwgt\" (UID: \"a07f9b30-2e69-49d0-b6ef-e7b596668c6c\") " pod="kube-system/kindnet-5jwgt"
	Nov 23 08:32:32 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:32.844829    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gtbbd" podStartSLOduration=1.844804271 podStartE2EDuration="1.844804271s" podCreationTimestamp="2025-11-23 08:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:32.827494465 +0000 UTC m=+7.151336542" watchObservedRunningTime="2025-11-23 08:32:32.844804271 +0000 UTC m=+7.168646354"
	Nov 23 08:32:32 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:32.844989    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5jwgt" podStartSLOduration=1.844976918 podStartE2EDuration="1.844976918s" podCreationTimestamp="2025-11-23 08:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:32.844434067 +0000 UTC m=+7.168276148" watchObservedRunningTime="2025-11-23 08:32:32.844976918 +0000 UTC m=+7.168818997"
	Nov 23 08:32:42 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:42.715938    1452 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:32:42 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:42.781372    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfjlp\" (UniqueName: \"kubernetes.io/projected/ba94dfd6-117a-458e-a2bc-56d07e4ece76-kube-api-access-vfjlp\") pod \"storage-provisioner\" (UID: \"ba94dfd6-117a-458e-a2bc-56d07e4ece76\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:42 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:42.781435    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ba94dfd6-117a-458e-a2bc-56d07e4ece76-tmp\") pod \"storage-provisioner\" (UID: \"ba94dfd6-117a-458e-a2bc-56d07e4ece76\") " pod="kube-system/storage-provisioner"
	Nov 23 08:32:42 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:42.882773    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad0a1f7b-df3d-473f-8add-c1280351efcf-config-volume\") pod \"coredns-66bc5c9577-6xpjl\" (UID: \"ad0a1f7b-df3d-473f-8add-c1280351efcf\") " pod="kube-system/coredns-66bc5c9577-6xpjl"
	Nov 23 08:32:42 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:42.882855    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5m5g\" (UniqueName: \"kubernetes.io/projected/ad0a1f7b-df3d-473f-8add-c1280351efcf-kube-api-access-h5m5g\") pod \"coredns-66bc5c9577-6xpjl\" (UID: \"ad0a1f7b-df3d-473f-8add-c1280351efcf\") " pod="kube-system/coredns-66bc5c9577-6xpjl"
	Nov 23 08:32:43 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:43.860492    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6xpjl" podStartSLOduration=12.860469746 podStartE2EDuration="12.860469746s" podCreationTimestamp="2025-11-23 08:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:43.860438406 +0000 UTC m=+18.184280488" watchObservedRunningTime="2025-11-23 08:32:43.860469746 +0000 UTC m=+18.184311829"
	Nov 23 08:32:43 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:43.878826    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.878554098 podStartE2EDuration="11.878554098s" podCreationTimestamp="2025-11-23 08:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:32:43.878407039 +0000 UTC m=+18.202249120" watchObservedRunningTime="2025-11-23 08:32:43.878554098 +0000 UTC m=+18.202396180"
	Nov 23 08:32:46 default-k8s-diff-port-589368 kubelet[1452]: I1123 08:32:46.109667    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmmzf\" (UniqueName: \"kubernetes.io/projected/f383f151-7f4a-4182-acac-584b4e100ec0-kube-api-access-pmmzf\") pod \"busybox\" (UID: \"f383f151-7f4a-4182-acac-584b4e100ec0\") " pod="default/busybox"
	
	
	==> storage-provisioner [be2a8d916be6abca8d80797137b844a6c5a7660c10516a7b8fa04cca64214e28] <==
	I1123 08:32:43.304603       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:32:43.309030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:43.315648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:43.315838       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:32:43.316125       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-589368_7855aaf2-a1e5-4762-963d-b81eb7235882!
	I1123 08:32:43.316447       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"868856f0-1e0e-4dcc-904e-01ec9763f97a", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-589368_7855aaf2-a1e5-4762-963d-b81eb7235882 became leader
	W1123 08:32:43.319769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:43.324822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:32:43.416345       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-589368_7855aaf2-a1e5-4762-963d-b81eb7235882!
	W1123 08:32:45.329001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:45.333478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:47.336701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:47.341454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:49.345103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:49.349572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:51.353136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:51.357483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:53.360478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:53.365421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:55.369147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:55.373476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:57.377091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:57.381851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:59.385423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:32:59.391253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-589368 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.03s)
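To iterate on this one subtest without re-running the full suite, the Go test runner's -run flag accepts the slash-separated subtest path shown in the FAIL line above. A minimal sketch, assuming the integration tests live under test/integration as in the minikube source tree, and that the out/minikube-linux-amd64 binary referenced by the post-mortem helpers has already been built; the suite's usual driver/runtime arguments are omitted here:

	# filter each level of the subtest hierarchy with a slash-separated pattern
	go test ./test/integration \
	  -run 'TestStartStop/group/default-k8s-diff-port/serial/DeployApp' \
	  -timeout 30m

Each slash-separated element of the -run pattern is a regular expression matched against the corresponding level of the subtest name, so the quoted string narrows the run to the default-k8s-diff-port DeployApp chain.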


Test pass (303/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 12.21
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 10.95
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.41
21 TestBinaryMirror 0.82
22 TestOffline 53.39
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 123.31
29 TestAddons/serial/Volcano 38.16
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 9.45
35 TestAddons/parallel/Registry 14.73
36 TestAddons/parallel/RegistryCreds 0.71
37 TestAddons/parallel/Ingress 20.55
38 TestAddons/parallel/InspektorGadget 10.64
39 TestAddons/parallel/MetricsServer 5.65
41 TestAddons/parallel/CSI 56.56
42 TestAddons/parallel/Headlamp 17.56
43 TestAddons/parallel/CloudSpanner 5.5
44 TestAddons/parallel/LocalPath 52.61
45 TestAddons/parallel/NvidiaDevicePlugin 5.52
46 TestAddons/parallel/Yakd 10.68
47 TestAddons/parallel/AmdGpuDevicePlugin 5.51
48 TestAddons/StoppedEnableDisable 12.43
49 TestCertOptions 32.4
50 TestCertExpiration 212.95
52 TestForceSystemdFlag 26.7
53 TestForceSystemdEnv 33.69
54 TestDockerEnvContainerd 38.67
58 TestErrorSpam/setup 20.03
59 TestErrorSpam/start 0.65
60 TestErrorSpam/status 0.96
61 TestErrorSpam/pause 1.45
62 TestErrorSpam/unpause 1.52
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.64
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.94
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.1
75 TestFunctional/serial/CacheCmd/cache/add_local 1.89
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 43.11
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.23
86 TestFunctional/serial/LogsFileCmd 1.24
87 TestFunctional/serial/InvalidService 4.31
89 TestFunctional/parallel/ConfigCmd 0.51
90 TestFunctional/parallel/DashboardCmd 12.43
91 TestFunctional/parallel/DryRun 0.38
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.07
97 TestFunctional/parallel/ServiceCmdConnect 8.74
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 31.23
101 TestFunctional/parallel/SSHCmd 0.59
102 TestFunctional/parallel/CpCmd 1.97
103 TestFunctional/parallel/MySQL 19.05
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 1.99
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
113 TestFunctional/parallel/License 0.43
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
117 TestFunctional/parallel/Version/short 0.08
118 TestFunctional/parallel/Version/components 0.55
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
123 TestFunctional/parallel/ImageCommands/ImageBuild 4.03
124 TestFunctional/parallel/ImageCommands/Setup 1.83
125 TestFunctional/parallel/ServiceCmd/DeployApp 16.15
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.14
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.21
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.24
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.99
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
138 TestFunctional/parallel/ServiceCmd/List 0.55
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
141 TestFunctional/parallel/ServiceCmd/Format 0.35
142 TestFunctional/parallel/ServiceCmd/URL 0.37
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
150 TestFunctional/parallel/ProfileCmd/profile_list 0.45
151 TestFunctional/parallel/MountCmd/any-port 8.05
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
153 TestFunctional/parallel/MountCmd/specific-port 1.86
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.99
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 125.22
163 TestMultiControlPlane/serial/DeployApp 5.38
164 TestMultiControlPlane/serial/PingHostFromPods 1.16
165 TestMultiControlPlane/serial/AddWorkerNode 25.87
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
168 TestMultiControlPlane/serial/CopyFile 17.69
169 TestMultiControlPlane/serial/StopSecondaryNode 12.76
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.62
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.94
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 99.27
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.33
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
176 TestMultiControlPlane/serial/StopCluster 36.16
177 TestMultiControlPlane/serial/RestartCluster 57.06
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
179 TestMultiControlPlane/serial/AddSecondaryNode 38.5
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.94
185 TestJSONOutput/start/Command 41.38
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.74
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.59
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.87
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 37.3
211 TestKicCustomNetwork/use_default_bridge_network 26.07
212 TestKicExistingNetwork 23.55
213 TestKicCustomSubnet 26.94
214 TestKicStaticIP 27.37
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 52.32
219 TestMountStart/serial/StartWithMountFirst 7.45
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 4.5
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.68
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 7.7
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 65.06
231 TestMultiNode/serial/DeployApp2Nodes 5.08
232 TestMultiNode/serial/PingHostFrom2Pods 0.78
233 TestMultiNode/serial/AddNode 25.29
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.68
236 TestMultiNode/serial/CopyFile 10
237 TestMultiNode/serial/StopNode 2.32
238 TestMultiNode/serial/StartAfterStop 6.93
239 TestMultiNode/serial/RestartKeepsNodes 69.53
240 TestMultiNode/serial/DeleteNode 5.27
241 TestMultiNode/serial/StopMultiNode 24.01
242 TestMultiNode/serial/RestartMultiNode 44.16
243 TestMultiNode/serial/ValidateNameConflict 22.27
248 TestPreload 112.1
250 TestScheduledStopUnix 99.09
253 TestInsufficientStorage 12.26
254 TestRunningBinaryUpgrade 45.16
256 TestKubernetesUpgrade 325.48
257 TestMissingContainerUpgrade 117.01
259 TestPause/serial/Start 53.9
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
262 TestNoKubernetes/serial/StartWithK8s 33.21
263 TestNoKubernetes/serial/StartWithStopK8s 16.78
264 TestNoKubernetes/serial/Start 7.44
265 TestPause/serial/SecondStartNoReconfiguration 7.22
266 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
268 TestNoKubernetes/serial/ProfileList 2.53
269 TestNoKubernetes/serial/Stop 1.3
270 TestPause/serial/Pause 0.85
271 TestNoKubernetes/serial/StartNoArgs 7.09
272 TestPause/serial/VerifyStatus 0.37
273 TestPause/serial/Unpause 0.7
274 TestPause/serial/PauseAgain 0.79
275 TestPause/serial/DeletePaused 2.77
276 TestPause/serial/VerifyDeletedResources 0.8
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
278 TestStoppedBinaryUpgrade/Setup 2.73
279 TestStoppedBinaryUpgrade/Upgrade 97.32
280 TestStoppedBinaryUpgrade/MinikubeLogs 1.22
288 TestNetworkPlugins/group/false 3.52
299 TestNetworkPlugins/group/auto/Start 43.52
300 TestNetworkPlugins/group/kindnet/Start 42.45
301 TestNetworkPlugins/group/auto/KubeletFlags 0.34
302 TestNetworkPlugins/group/auto/NetCatPod 8.22
303 TestNetworkPlugins/group/calico/Start 53.29
304 TestNetworkPlugins/group/auto/DNS 0.17
305 TestNetworkPlugins/group/auto/Localhost 0.15
306 TestNetworkPlugins/group/auto/HairPin 0.15
307 TestNetworkPlugins/group/custom-flannel/Start 48.92
308 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
309 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
310 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
311 TestNetworkPlugins/group/kindnet/DNS 0.15
312 TestNetworkPlugins/group/kindnet/Localhost 0.13
313 TestNetworkPlugins/group/kindnet/HairPin 0.13
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/calico/KubeletFlags 0.35
316 TestNetworkPlugins/group/calico/NetCatPod 9.32
317 TestNetworkPlugins/group/enable-default-cni/Start 64.2
318 TestNetworkPlugins/group/calico/DNS 0.22
319 TestNetworkPlugins/group/calico/Localhost 0.13
320 TestNetworkPlugins/group/calico/HairPin 0.11
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
323 TestNetworkPlugins/group/custom-flannel/DNS 0.15
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
326 TestNetworkPlugins/group/flannel/Start 54.97
327 TestNetworkPlugins/group/bridge/Start 67.08
328 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
329 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.19
330 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
331 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
332 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
333 TestNetworkPlugins/group/flannel/ControllerPod 6.01
334 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
335 TestNetworkPlugins/group/flannel/NetCatPod 9.24
337 TestStartStop/group/old-k8s-version/serial/FirstStart 55.72
338 TestNetworkPlugins/group/flannel/DNS 0.16
339 TestNetworkPlugins/group/flannel/Localhost 0.11
340 TestNetworkPlugins/group/flannel/HairPin 0.11
341 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
342 TestNetworkPlugins/group/bridge/NetCatPod 8.75
344 TestStartStop/group/no-preload/serial/FirstStart 55.85
345 TestNetworkPlugins/group/bridge/DNS 0.16
346 TestNetworkPlugins/group/bridge/Localhost 0.17
347 TestNetworkPlugins/group/bridge/HairPin 0.12
349 TestStartStop/group/embed-certs/serial/FirstStart 46.27
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.56
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.98
354 TestStartStop/group/old-k8s-version/serial/Stop 12.17
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
358 TestStartStop/group/old-k8s-version/serial/SecondStart 46.58
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
360 TestStartStop/group/embed-certs/serial/Stop 12.21
362 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
363 TestStartStop/group/no-preload/serial/Stop 12.14
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
365 TestStartStop/group/embed-certs/serial/SecondStart 50.7
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
367 TestStartStop/group/no-preload/serial/SecondStart 52.21
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
369 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.61
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
371 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.06
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
374 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
375 TestStartStop/group/old-k8s-version/serial/Pause 2.89
377 TestStartStop/group/newest-cni/serial/FirstStart 26.8
378 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
381 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
382 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
383 TestStartStop/group/embed-certs/serial/Pause 3.21
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
386 TestStartStop/group/no-preload/serial/Pause 3.05
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
388 TestStartStop/group/newest-cni/serial/DeployApp 0
389 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
390 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
391 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.95
392 TestStartStop/group/newest-cni/serial/Stop 1.38
393 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
394 TestStartStop/group/newest-cni/serial/SecondStart 9.96
395 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
396 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
397 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
398 TestStartStop/group/newest-cni/serial/Pause 2.59
x
+
TestDownloadOnly/v1.28.0/json-events (12.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-949265 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-949265 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.214287627s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.21s)
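With -o=json, minikube emits one JSON object per line on stdout. A hedged sketch for consuming that stream from a pipe; the "type" field name follows minikube's CloudEvents-style event output and should be treated as an assumption:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Reads a minikube --output=json event stream from stdin, one JSON object
// per line, and prints each event's type field.
func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise in the stream
		}
		fmt.Println(ev["type"])
	}
}

Usage would be along the lines of: out/minikube-linux-amd64 start -o=json ... | go run events.go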

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 07:55:46.530764   14479 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1123 07:55:46.530840   14479 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
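The check logged above is essentially a stat of the cached tarball. A minimal sketch, under the assumption that the minikube home resolves to $HOME/.minikube (the harness uses a Jenkins-specific MINIKUBE_HOME instead):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path shape taken from the "Found local preload" log line above;
	// the $HOME prefix is an assumption for a default setup.
	p := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("preload exists:", p)
}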

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-949265
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-949265: exit status 85 (78.856113ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-949265 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-949265 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:55:34
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:55:34.371198   14491 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:55:34.371427   14491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:34.371436   14491 out.go:374] Setting ErrFile to fd 2...
	I1123 07:55:34.371440   14491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:34.371646   14491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	W1123 07:55:34.371765   14491 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21966-10922/.minikube/config/config.json: open /home/jenkins/minikube-integration/21966-10922/.minikube/config/config.json: no such file or directory
	I1123 07:55:34.372220   14491 out.go:368] Setting JSON to true
	I1123 07:55:34.373259   14491 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2272,"bootTime":1763882262,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 07:55:34.373330   14491 start.go:143] virtualization: kvm guest
	I1123 07:55:34.377996   14491 out.go:99] [download-only-949265] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1123 07:55:34.378644   14491 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 07:55:34.378655   14491 notify.go:221] Checking for updates...
	I1123 07:55:34.380104   14491 out.go:171] MINIKUBE_LOCATION=21966
	I1123 07:55:34.381674   14491 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:55:34.383035   14491 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 07:55:34.384217   14491 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 07:55:34.385564   14491 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 07:55:34.387819   14491 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 07:55:34.388190   14491 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:55:34.413434   14491 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 07:55:34.413499   14491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:34.796671   14491 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-23 07:55:34.787069423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:34.796782   14491 docker.go:319] overlay module found
	I1123 07:55:34.798650   14491 out.go:99] Using the docker driver based on user configuration
	I1123 07:55:34.798682   14491 start.go:309] selected driver: docker
	I1123 07:55:34.798687   14491 start.go:927] validating driver "docker" against <nil>
	I1123 07:55:34.798773   14491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:34.861452   14491 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-23 07:55:34.852207767 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:34.861611   14491 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:55:34.862217   14491 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 07:55:34.862367   14491 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 07:55:34.864320   14491 out.go:171] Using Docker driver with root privileges
	I1123 07:55:34.865580   14491 cni.go:84] Creating CNI manager for ""
	I1123 07:55:34.865640   14491 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 07:55:34.865652   14491 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 07:55:34.865710   14491 start.go:353] cluster config:
	{Name:download-only-949265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-949265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 07:55:34.867062   14491 out.go:99] Starting "download-only-949265" primary control-plane node in "download-only-949265" cluster
	I1123 07:55:34.867081   14491 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 07:55:34.868354   14491 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 07:55:34.868391   14491 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 07:55:34.868525   14491 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 07:55:34.885139   14491 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:55:34.885346   14491 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 07:55:34.885464   14491 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:55:34.964009   14491 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1123 07:55:34.964046   14491 cache.go:65] Caching tarball of preloaded images
	I1123 07:55:34.964232   14491 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 07:55:34.966150   14491 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 07:55:34.966194   14491 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1123 07:55:35.066691   14491 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1123 07:55:35.066834   14491 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1123 07:55:39.874286   14491 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	
	
	* The control-plane node download-only-949265 host does not exist
	  To start a cluster, run: "minikube start -p download-only-949265"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
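The preload download above carries its expected digest in the URL (?checksum=md5:...). A sketch of the equivalent verification, using the checksum value from the log; minikube performs this check internally, so this is illustrative only:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// Usage (illustrative): go run verify.go <path-to-preload-tarball>
func main() {
	const want = "2746dfda401436a5341e0500068bf339" // md5 from the download URL above
	f, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Println("match:", got == want, "got:", got)
}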

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-949265
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (10.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-290152 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-290152 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.951166145s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (10.95s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 07:55:57.945926   14479 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1123 07:55:57.945970   14479 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-290152
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-290152: exit status 85 (73.291376ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-949265 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-949265 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ delete  │ -p download-only-949265                                                                                                                                                               │ download-only-949265 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ start   │ -o=json --download-only -p download-only-290152 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-290152 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:55:47
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:55:47.050209   14880 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:55:47.050312   14880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:47.050318   14880 out.go:374] Setting ErrFile to fd 2...
	I1123 07:55:47.050324   14880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:47.050529   14880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 07:55:47.050970   14880 out.go:368] Setting JSON to true
	I1123 07:55:47.051875   14880 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2285,"bootTime":1763882262,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 07:55:47.051933   14880 start.go:143] virtualization: kvm guest
	I1123 07:55:47.053930   14880 out.go:99] [download-only-290152] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 07:55:47.054067   14880 notify.go:221] Checking for updates...
	I1123 07:55:47.055444   14880 out.go:171] MINIKUBE_LOCATION=21966
	I1123 07:55:47.056845   14880 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:55:47.058194   14880 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 07:55:47.059437   14880 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 07:55:47.060730   14880 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 07:55:47.062883   14880 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 07:55:47.063157   14880 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:55:47.087871   14880 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 07:55:47.087970   14880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:47.147070   14880 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-23 07:55:47.136877462 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:47.147174   14880 docker.go:319] overlay module found
	I1123 07:55:47.148810   14880 out.go:99] Using the docker driver based on user configuration
	I1123 07:55:47.148844   14880 start.go:309] selected driver: docker
	I1123 07:55:47.148850   14880 start.go:927] validating driver "docker" against <nil>
	I1123 07:55:47.148937   14880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:47.205455   14880 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-23 07:55:47.195687512 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:47.205619   14880 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:55:47.206103   14880 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 07:55:47.206295   14880 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 07:55:47.208092   14880 out.go:171] Using Docker driver with root privileges
	I1123 07:55:47.209231   14880 cni.go:84] Creating CNI manager for ""
	I1123 07:55:47.209298   14880 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 07:55:47.209309   14880 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 07:55:47.209375   14880 start.go:353] cluster config:
	{Name:download-only-290152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-290152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 07:55:47.210593   14880 out.go:99] Starting "download-only-290152" primary control-plane node in "download-only-290152" cluster
	I1123 07:55:47.210618   14880 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 07:55:47.211597   14880 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 07:55:47.211631   14880 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 07:55:47.211712   14880 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 07:55:47.228404   14880 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:55:47.228553   14880 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 07:55:47.228571   14880 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 07:55:47.228576   14880 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 07:55:47.228586   14880 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 07:55:47.303723   14880 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 07:55:47.303777   14880 cache.go:65] Caching tarball of preloaded images
	I1123 07:55:47.303966   14880 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 07:55:47.305651   14880 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1123 07:55:47.305673   14880 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1123 07:55:47.403556   14880 preload.go:295] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1123 07:55:47.403606   14880 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21966-10922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-290152 host does not exist
	  To start a cluster, run: "minikube start -p download-only-290152"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-290152
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.41s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-454270 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-454270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-454270
--- PASS: TestDownloadOnlyKic (0.41s)

                                                
                                    
x
+
TestBinaryMirror (0.82s)

                                                
                                                
=== RUN   TestBinaryMirror
I1123 07:55:59.101270   14479 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-719883 --alsologtostderr --binary-mirror http://127.0.0.1:42377 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-719883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-719883
--- PASS: TestBinaryMirror (0.82s)
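Unlike the md5 case earlier, the checksum here is indirect: checksum=file:<url> means the expected digest is itself fetched from a URL (the .sha256 file). A hedged sketch of that two-step verification:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// Usage (illustrative): go run verify.go <path-to-downloaded-kubectl>
func main() {
	// Fetch the expected digest; URL taken from the log line above.
	resp, err := http.Get("https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	sum, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0] // tolerate "<hash>" or "<hash>  <name>" layouts

	// Hash the local binary and compare.
	f, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	fmt.Println("match:", hex.EncodeToString(h.Sum(nil)) == want)
}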

                                                
                                    
x
+
TestOffline (53.39s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-666053 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-666053 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (50.70414731s)
helpers_test.go:175: Cleaning up "offline-containerd-666053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-666053
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-666053: (2.687007819s)
--- PASS: TestOffline (53.39s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-668375
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-668375: exit status 85 (63.388725ms)

                                                
                                                
-- stdout --
	* Profile "addons-668375" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-668375"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-668375
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-668375: exit status 85 (64.456374ms)

                                                
                                                
-- stdout --
	* Profile "addons-668375" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-668375"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
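Both PreSetup checks pass precisely because the command exits non-zero (exit status 85) with the "Profile ... not found" message shown in stdout. A small sketch of how a caller can observe that exit code from Go (binary path and arguments copied from the test invocation above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-668375")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Against a non-existing profile this is expected to report 85.
		fmt.Printf("exit %d: %s", ee.ExitCode(), out)
		return
	}
	fmt.Printf("exit 0: %s", out)
}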

TestAddons/Setup (123.31s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-668375 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-668375 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.310736249s)
--- PASS: TestAddons/Setup (123.31s)

TestAddons/serial/Volcano (38.16s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 15.752485ms
addons_test.go:884: volcano-controller stabilized in 15.792794ms
addons_test.go:868: volcano-scheduler stabilized in 15.817818ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-sfsc4" [497857c5-a6a8-436e-bc43-6921648c6195] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003639964s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-q46qj" [6ee96796-7ebc-45d0-9539-12dc6ecc20ca] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004099002s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-rf4jw" [a51dd6c9-2789-4cb8-a62c-00c78d9d3b43] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004242766s
addons_test.go:903: (dbg) Run:  kubectl --context addons-668375 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-668375 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-668375 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [5a12146d-cb36-4ab1-843a-dc9538d6e395] Pending
helpers_test.go:352: "test-job-nginx-0" [5a12146d-cb36-4ab1-843a-dc9538d6e395] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [5a12146d-cb36-4ab1-843a-dc9538d6e395] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.003570615s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-668375 addons disable volcano --alsologtostderr -v=1: (11.814586298s)
--- PASS: TestAddons/serial/Volcano (38.16s)
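
testdata/vcjob.yaml is not inlined in the log. A minimal Volcano Job matching what the test waits for (a pod labeled volcano.sh/job-name=test-job running nginx in the my-volcano namespace) could look like the sketch below; every field beyond the names visible above is an assumption:

kubectl --context addons-668375 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano      # hand scheduling to the volcano addon
  minAvailable: 1
  tasks:
    - name: nginx
      replicas: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx
EOF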

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-668375 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-668375 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (9.45s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-668375 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-668375 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5f7ff90d-a6fd-4f20-b5ce-c3544789e7a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5f7ff90d-a6fd-4f20-b5ce-c3544789e7a6] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003431467s
addons_test.go:694: (dbg) Run:  kubectl --context addons-668375 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-668375 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-668375 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.45s)
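
The two printenv probes are the actual verification here: the gcp-auth webhook mutates pods created after the addon is active, so the busybox pod should see GOOGLE_APPLICATION_CREDENTIALS pointing at a mounted fake-credentials file and a populated GOOGLE_CLOUD_PROJECT. Re-checking by hand against the same pod:

kubectl --context addons-668375 exec busybox -- sh -c \
  'printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT'
# assumption: the credentials file is readable in-pod, so this shows the fake key
kubectl --context addons-668375 exec busybox -- sh -c 'cat "$GOOGLE_APPLICATION_CREDENTIALS"'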

TestAddons/parallel/Registry (14.73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.685053ms
I1123 07:58:59.963877   14479 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 07:58:59.963893   14479 kapi.go:107] duration metric: took 2.844159ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-nrvrz" [8c517f42-83e4-475c-b152-133142959efd] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003195373s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-5zlt8" [cb6e37f8-f5db-416e-8b64-f0ff3c198211] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002930527s
addons_test.go:392: (dbg) Run:  kubectl --context addons-668375 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-668375 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-668375 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.928756202s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.73s)
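
Everything in the registry check rides on cluster DNS: a throwaway busybox pod probes the addon's Service by its cluster-local name, so a passing run proves both the registry endpoints and name resolution. The probe can be replayed verbatim from the log:

kubectl --context addons-668375 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
# --spider fetches headers only; a failed request exits non-zero and fails the pod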

TestAddons/parallel/RegistryCreds (0.71s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.952825ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-668375
addons_test.go:332: (dbg) Run:  kubectl --context addons-668375 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)

TestAddons/parallel/Ingress (20.55s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-668375 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-668375 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-668375 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [5b8e9d86-cb5d-4b57-a177-1510f5f5a9a1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [5b8e9d86-cb5d-4b57-a177-1510f5f5a9a1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004152897s
I1123 07:59:38.633750   14479 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-668375 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-668375 addons disable ingress-dns --alsologtostderr -v=1: (1.638082423s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-668375 addons disable ingress --alsologtostderr -v=1: (7.705377811s)
--- PASS: TestAddons/parallel/Ingress (20.55s)
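
testdata/nginx-ingress-v1.yaml is not shown in the log; an Ingress of the shape implied by the curl above (host nginx.example.com routed to an nginx Service on port 80) would look roughly like this sketch, with the real fixture free to differ in names:

kubectl --context addons-668375 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx        # served by the ingress-nginx controller addon
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx      # assumption: Service name from nginx-pod-svc.yaml
                port:
                  number: 80
EOF
# the test then curls the node with a matching Host header:
out/minikube-linux-amd64 -p addons-668375 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"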

TestAddons/parallel/InspektorGadget (10.64s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-h6l7c" [1314cfd5-b6fb-49c5-bdf7-4b20863564c0] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003578533s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-668375 addons disable inspektor-gadget --alsologtostderr -v=1: (5.635599376s)
--- PASS: TestAddons/parallel/InspektorGadget (10.64s)

TestAddons/parallel/MetricsServer (5.65s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.818086ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-ch526" [91b1c5eb-4bdb-4e45-aa7e-ae7f6570ce40] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002614398s
addons_test.go:463: (dbg) Run:  kubectl --context addons-668375 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

TestAddons/parallel/CSI (56.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 2.853208ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-668375 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/11/23 07:59:14 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-668375 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d8725f94-98f8-4e89-87f2-198042e75772] Pending
helpers_test.go:352: "task-pv-pod" [d8725f94-98f8-4e89-87f2-198042e75772] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d8725f94-98f8-4e89-87f2-198042e75772] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003233368s
addons_test.go:572: (dbg) Run:  kubectl --context addons-668375 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-668375 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-668375 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-668375 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-668375 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-668375 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-668375 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [fac80dfb-e4ed-4a28-bdef-cff7b8db43c0] Pending
helpers_test.go:352: "task-pv-pod-restore" [fac80dfb-e4ed-4a28-bdef-cff7b8db43c0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [fac80dfb-e4ed-4a28-bdef-cff7b8db43c0] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003761111s
addons_test.go:614: (dbg) Run:  kubectl --context addons-668375 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-668375 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-668375 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-668375 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.541619289s)
--- PASS: TestAddons/parallel/CSI (56.56s)
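
The heart of the CSI flow is the snapshot round trip: cut a VolumeSnapshot from the bound claim, delete the original pod and PVC, then create a new claim with the snapshot as its dataSource. The fixtures are not inlined in the log; sketches of the two key objects, using the names from the run and assuming the csi-hostpath-driver's usual class names:

kubectl --context addons-668375 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumption: addon default
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumption: addon default
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF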

TestAddons/parallel/Headlamp (17.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-668375 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-gmk44" [2207b42e-3d27-4bc1-a58f-8d144d1d72be] Pending
helpers_test.go:352: "headlamp-dfcdc64b-gmk44" [2207b42e-3d27-4bc1-a58f-8d144d1d72be] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-gmk44" [2207b42e-3d27-4bc1-a58f-8d144d1d72be] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005619327s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-668375 addons disable headlamp --alsologtostderr -v=1: (5.785719273s)
--- PASS: TestAddons/parallel/Headlamp (17.56s)

TestAddons/parallel/CloudSpanner (5.50s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-9w8v6" [cc59e89a-4a28-4a54-9a7f-eaf12a385397] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002939755s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

TestAddons/parallel/LocalPath (52.61s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-668375 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-668375 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-668375 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [05dec889-1c8e-4ea6-8081-a73436b7c13f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [05dec889-1c8e-4ea6-8081-a73436b7c13f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [05dec889-1c8e-4ea6-8081-a73436b7c13f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003430725s
addons_test.go:967: (dbg) Run:  kubectl --context addons-668375 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 ssh "cat /opt/local-path-provisioner/pvc-2cfe786c-d591-4e6f-b840-e813edcbef22_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-668375 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-668375 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-668375 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.733935178s)
--- PASS: TestAddons/parallel/LocalPath (52.61s)
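
The storage-provisioner-rancher addon installs the rancher.io/local-path provisioner, which backs PVs with host directories under /opt/local-path-provisioner (the ssh cat above reads a file straight out of one). A claim of the shape the test applies, sketched with the provisioner's conventional class name as an assumption:

kubectl --context addons-668375 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path    # assumption: the addon's default class
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 128Mi
EOF
# local-path binds on first consumer, which is why the PVC above polls as
# Pending until the pod from storage-provisioner-rancher/pod.yaml mounts it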

TestAddons/parallel/NvidiaDevicePlugin (5.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
I1123 07:58:59.961068   14479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-mvpvw" [7eb266a5-03fb-48a0-baba-29652057e8d3] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003483102s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.52s)

TestAddons/parallel/Yakd (10.68s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-q6r48" [8e406783-7748-44f6-b666-ac83e0e00c8b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00371109s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-668375 addons disable yakd --alsologtostderr -v=1: (5.678456077s)
--- PASS: TestAddons/parallel/Yakd (10.68s)

TestAddons/parallel/AmdGpuDevicePlugin (5.51s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-tbq6d" [54a79638-3e43-486b-826a-b0445eaad8a0] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003423236s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-668375 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.51s)

TestAddons/StoppedEnableDisable (12.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-668375
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-668375: (12.139095623s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-668375
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-668375
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-668375
--- PASS: TestAddons/StoppedEnableDisable (12.43s)

TestCertOptions (32.40s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-771136 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-771136 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (24.871866404s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-771136 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-771136 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-771136 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-771136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-771136
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-771136: (6.772414425s)
--- PASS: TestCertOptions (32.40s)
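
The openssl step verifies that every --apiserver-ips/--apiserver-names value landed in the API server certificate's Subject Alternative Name, alongside the non-default port 8555. A narrower re-check that greps just the SAN block:

out/minikube-linux-amd64 -p cert-options-771136 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'
# expected to include 127.0.0.1, 192.168.15.15, localhost and www.google.com
# in addition to the cluster's built-in SANs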

TestCertExpiration (212.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-215889 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-215889 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (24.769935576s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-215889 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-215889 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.699470491s)
helpers_test.go:175: Cleaning up "cert-expiration-215889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-215889
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-215889: (2.483299155s)
--- PASS: TestCertExpiration (212.95s)

TestForceSystemdFlag (26.70s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-045472 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-045472 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.285910349s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-045472 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-045472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-045472
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-045472: (3.102525386s)
--- PASS: TestForceSystemdFlag (26.70s)
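
Catting /etc/containerd/config.toml is how the test confirms --force-systemd took effect: with the systemd cgroup driver forced on, containerd's runc runtime options should carry SystemdCgroup = true. A targeted version of the same check (SystemdCgroup is containerd's standard option name; its exact table location in the file is left to the config):

out/minikube-linux-amd64 -p force-systemd-flag-045472 ssh \
  "grep -n 'SystemdCgroup' /etc/containerd/config.toml"
# expected: SystemdCgroup = true under the runc options table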

TestForceSystemdEnv (33.69s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-764590 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-764590 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.144419875s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-764590 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-764590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-764590
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-764590: (2.207739978s)
--- PASS: TestForceSystemdEnv (33.69s)

TestDockerEnvContainerd (38.67s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-368315 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-368315 --driver=docker  --container-runtime=containerd: (22.894177245s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-368315"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXsfFKVi/agent.38444" SSH_AGENT_PID="38445" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXsfFKVi/agent.38444" SSH_AGENT_PID="38445" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXsfFKVi/agent.38444" SSH_AGENT_PID="38445" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.870133s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXsfFKVi/agent.38444" SSH_AGENT_PID="38445" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-368315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-368315
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-368315: (1.951488561s)
--- PASS: TestDockerEnvContainerd (38.67s)
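
docker-env with --ssh-host/--ssh-add is the variant exercised here: rather than exporting a TCP DOCKER_HOST, it loads the node's key into an ssh-agent and points the local docker CLI at the cluster over ssh://, which is what lets the BuildKit-disabled build above land inside the node. The usual interactive form, with the profile name from this run:

eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-368315)"
docker version     # now served by the docker endpoint inside the minikube node
docker image ls    # shows images built into the cluster, as the test asserts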

TestErrorSpam/setup (20.03s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-829486 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-829486 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-829486 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-829486 --driver=docker  --container-runtime=containerd: (20.032948468s)
--- PASS: TestErrorSpam/setup (20.03s)

TestErrorSpam/start (0.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

TestErrorSpam/status (0.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 status
--- PASS: TestErrorSpam/status (0.96s)

TestErrorSpam/pause (1.45s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.52s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

TestErrorSpam/stop (1.50s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 stop: (1.296655293s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-829486 --log_dir /tmp/nospam-829486 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21966-10922/.minikube/files/etc/test/nested/copy/14479/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.64s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410903 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-410903 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (37.635254096s)
--- PASS: TestFunctional/serial/StartWithProxy (37.64s)

TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.94s)

=== RUN   TestFunctional/serial/SoftStart
I1123 08:02:13.061897   14479 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410903 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-410903 --alsologtostderr -v=8: (5.936280982s)
functional_test.go:678: soft start took 5.937063402s for "functional-410903" cluster.
I1123 08:02:18.998641   14479 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (5.94s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-410903 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.10s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-410903 cache add registry.k8s.io/pause:3.1: (1.036113726s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-410903 cache add registry.k8s.io/pause:3.3: (1.175680659s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.10s)

TestFunctional/serial/CacheCmd/cache/add_local (1.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-410903 /tmp/TestFunctionalserialCacheCmdcacheadd_local317751285/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 cache add minikube-local-cache-test:functional-410903
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-410903 cache add minikube-local-cache-test:functional-410903: (1.546965115s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 cache delete minikube-local-cache-test:functional-410903
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-410903
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.89s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410903 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.886363ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)
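
The reload test's logic: remove the cached image from the node's containerd store, prove crictl inspecti now fails (the non-zero exit above), then cache reload pushes everything in minikube's local cache back onto the node. Condensed into shell with the same image and profile as the run:

out/minikube-linux-amd64 -p functional-410903 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-410903 ssh sudo crictl inspecti registry.k8s.io/pause:latest \
  || echo "image gone, as expected"
out/minikube-linux-amd64 -p functional-410903 cache reload   # re-loads cached images onto the node
out/minikube-linux-amd64 -p functional-410903 ssh sudo crictl inspecti registry.k8s.io/pause:latest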

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 kubectl -- --context functional-410903 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-410903 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (43.11s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410903 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1123 08:03:03.308078   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:03.314448   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:03.325914   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:03.347289   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:03.388726   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:03.470246   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:03.631817   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:03.953492   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:04.595611   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:05.877244   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:08.439298   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-410903 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.104843636s)
functional_test.go:776: restart took 43.104952633s for "functional-410903" cluster.
I1123 08:03:09.548043   14479 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (43.11s)
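Note: --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision is passed straight through to the kube-apiserver static pod. A by-hand spot check (a sketch only, not part of the test; the pod name kube-apiserver-functional-410903 assumes the standard kube-apiserver-<node> naming):
	# confirm the admission plugin reached the apiserver command line (hypothetical check)
	kubectl --context functional-410903 -n kube-system get pod kube-apiserver-functional-410903 \
	  -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins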

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-410903 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.23s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-410903 logs: (1.228864898s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

TestFunctional/serial/LogsFileCmd (1.24s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 logs --file /tmp/TestFunctionalserialLogsFileCmd3997822420/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-410903 logs --file /tmp/TestFunctionalserialLogsFileCmd3997822420/001/logs.txt: (1.243007506s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

TestFunctional/serial/InvalidService (4.31s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-410903 apply -f testdata/invalidsvc.yaml
E1123 08:03:13.561154   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-410903
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-410903: exit status 115 (354.164342ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31488 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-410903 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)
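The negative path above can be replayed by hand with the same commands the test drives (a sketch using this run's profile name):
	kubectl --context functional-410903 apply -f testdata/invalidsvc.yaml    # Service whose selector matches no running pod
	out/minikube-linux-amd64 service invalid-svc -p functional-410903        # prints the table, then exits 115 (SVC_UNREACHABLE)
	kubectl --context functional-410903 delete -f testdata/invalidsvc.yaml   # clean up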

TestFunctional/parallel/ConfigCmd (0.51s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410903 config get cpus: exit status 14 (101.968475ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410903 config get cpus: exit status 14 (97.680817ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
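The exchange above encodes the expected exit codes: config get on an unset key exits 14, while the set/get/unset round trip succeeds. A condensed sketch:
	out/minikube-linux-amd64 -p functional-410903 config get cpus     # unset key -> exit status 14
	out/minikube-linux-amd64 -p functional-410903 config set cpus 2
	out/minikube-linux-amd64 -p functional-410903 config get cpus     # prints 2, exits 0
	out/minikube-linux-amd64 -p functional-410903 config unset cpus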

TestFunctional/parallel/DashboardCmd (12.43s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-410903 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-410903 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 59960: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.43s)

TestFunctional/parallel/DryRun (0.38s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410903 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-410903 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (160.874936ms)
-- stdout --
	* [functional-410903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1123 08:03:38.341453   59170 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:03:38.341793   59170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:03:38.341807   59170 out.go:374] Setting ErrFile to fd 2...
	I1123 08:03:38.341812   59170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:03:38.342041   59170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:03:38.342558   59170 out.go:368] Setting JSON to false
	I1123 08:03:38.343619   59170 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2756,"bootTime":1763882262,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:03:38.343679   59170 start.go:143] virtualization: kvm guest
	I1123 08:03:38.345451   59170 out.go:179] * [functional-410903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:03:38.346665   59170 notify.go:221] Checking for updates...
	I1123 08:03:38.346677   59170 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:03:38.347773   59170 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:03:38.350375   59170 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:03:38.351524   59170 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 08:03:38.352647   59170 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:03:38.353864   59170 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:03:38.355476   59170 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:03:38.356318   59170 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:03:38.381286   59170 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:03:38.381382   59170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:03:38.436875   59170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 08:03:38.427396657 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:03:38.436996   59170 docker.go:319] overlay module found
	I1123 08:03:38.438324   59170 out.go:179] * Using the docker driver based on existing profile
	I1123 08:03:38.439308   59170 start.go:309] selected driver: docker
	I1123 08:03:38.439319   59170 start.go:927] validating driver "docker" against &{Name:functional-410903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-410903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:03:38.439442   59170 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:03:38.441160   59170 out.go:203] 
	W1123 08:03:38.442213   59170 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 08:03:38.443401   59170 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410903 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.38s)
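--dry-run validates flags without touching the running cluster: the 250MB request trips the 1800MB usable minimum and exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, unconstrained dry run exits 0. A sketch of the failing case:
	out/minikube-linux-amd64 start -p functional-410903 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=containerd; echo "exit=$?"   # expect exit=23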

TestFunctional/parallel/InternationalLanguage (0.18s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410903 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-410903 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (175.292244ms)
-- stdout --
	* [functional-410903] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1123 08:03:38.728932   59388 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:03:38.729056   59388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:03:38.729063   59388 out.go:374] Setting ErrFile to fd 2...
	I1123 08:03:38.729069   59388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:03:38.729423   59388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:03:38.729866   59388 out.go:368] Setting JSON to false
	I1123 08:03:38.730957   59388 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2757,"bootTime":1763882262,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:03:38.731017   59388 start.go:143] virtualization: kvm guest
	I1123 08:03:38.733528   59388 out.go:179] * [functional-410903] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1123 08:03:38.734742   59388 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:03:38.734784   59388 notify.go:221] Checking for updates...
	I1123 08:03:38.736907   59388 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:03:38.738253   59388 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:03:38.739650   59388 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 08:03:38.740757   59388 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:03:38.742094   59388 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:03:38.743599   59388 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:03:38.744376   59388 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:03:38.770722   59388 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:03:38.770839   59388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:03:38.833031   59388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 08:03:38.821690464 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:03:38.833143   59388 docker.go:319] overlay module found
	I1123 08:03:38.834726   59388 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 08:03:38.835726   59388 start.go:309] selected driver: docker
	I1123 08:03:38.835737   59388 start.go:927] validating driver "docker" against &{Name:functional-410903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-410903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:03:38.835824   59388 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:03:38.837530   59388 out.go:203] 
	W1123 08:03:38.838695   59388 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 08:03:38.840004   59388 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.07s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)

TestFunctional/parallel/ServiceCmdConnect (8.74s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-410903 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-410903 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-l9cpj" [6464caea-2239-42da-84f0-77681f164467] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-l9cpj" [6464caea-2239-42da-84f0-77681f164467] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003921889s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31295
functional_test.go:1680: http://192.168.49.2:31295: success! body:
Request served by hello-node-connect-7d85dfc575-l9cpj

HTTP/1.1 GET /

Host: 192.168.49.2:31295
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.74s)
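The probe the test performs (note the Go-http-client/1.1 User-Agent above) can be approximated with curl; a sketch using the same commands:
	kubectl --context functional-410903 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-410903 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-410903 service hello-node-connect --url)
	curl -s "$URL"   # echo-server reflects the request back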

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (31.23s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [7f5ea00c-629a-440b-8337-33a530af255d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004149928s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-410903 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-410903 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-410903 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-410903 apply -f testdata/storage-provisioner/pod.yaml
I1123 08:03:30.887935   14479 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [15bb4a26-efd9-4ae3-b678-ba47937a9dc8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [15bb4a26-efd9-4ae3-b678-ba47937a9dc8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003269113s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-410903 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-410903 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-410903 delete -f testdata/storage-provisioner/pod.yaml: (1.48703812s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-410903 apply -f testdata/storage-provisioner/pod.yaml
I1123 08:03:42.604311   14479 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8a74be78-0912-4e73-b382-787aa5d54679] Pending
helpers_test.go:352: "sp-pod" [8a74be78-0912-4e73-b382-787aa5d54679] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8a74be78-0912-4e73-b382-787aa5d54679] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003476758s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-410903 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.23s)
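The sequence above is a persistence check: a file written through the PVC must survive deleting and recreating the pod. Condensed sketch of the same flow:
	kubectl --context functional-410903 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-410903 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-410903 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-410903 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-410903 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
	kubectl --context functional-410903 exec sp-pod -- ls /tmp/mount                     # foo should still be there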

TestFunctional/parallel/SSHCmd (0.59s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

TestFunctional/parallel/CpCmd (1.97s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh -n functional-410903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 cp functional-410903:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1912733280/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh -n functional-410903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh -n functional-410903 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.97s)
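minikube cp is exercised in both directions plus an absolute on-node destination, with each copy verified via sudo cat over ssh. A sketch of the two main directions:
	out/minikube-linux-amd64 -p functional-410903 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
	out/minikube-linux-amd64 -p functional-410903 cp functional-410903:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host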

TestFunctional/parallel/MySQL (19.05s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-410903 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-48lpl" [617ae236-6835-4eac-aeaa-d7d69958ee75] Pending
helpers_test.go:352: "mysql-5bb876957f-48lpl" [617ae236-6835-4eac-aeaa-d7d69958ee75] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-48lpl" [617ae236-6835-4eac-aeaa-d7d69958ee75] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.003513388s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-410903 exec mysql-5bb876957f-48lpl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-410903 exec mysql-5bb876957f-48lpl -- mysql -ppassword -e "show databases;": exit status 1 (129.536319ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1123 08:03:33.833714   14479 retry.go:31] will retry after 909.470614ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-410903 exec mysql-5bb876957f-48lpl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-410903 exec mysql-5bb876957f-48lpl -- mysql -ppassword -e "show databases;": exit status 1 (128.567907ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1123 08:03:34.872630   14479 retry.go:31] will retry after 1.56857959s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-410903 exec mysql-5bb876957f-48lpl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.05s)
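The two ERROR 2002 failures are expected: the pod turns Running before mysqld has created its socket, so the harness retries with backoff until the query succeeds. A hand-rolled equivalent of that wait (my own loop, not the test's retry.go):
	until kubectl --context functional-410903 exec mysql-5bb876957f-48lpl -- \
	      mysql -ppassword -e "show databases;"; do sleep 2; done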

TestFunctional/parallel/FileSync (0.33s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14479/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "sudo cat /etc/test/nested/copy/14479/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (1.99s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14479.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "sudo cat /etc/ssl/certs/14479.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14479.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "sudo cat /usr/share/ca-certificates/14479.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/144792.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "sudo cat /etc/ssl/certs/144792.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/144792.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "sudo cat /usr/share/ca-certificates/144792.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.99s)
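The 51391683.0 and 3ec20f2e.0 names are OpenSSL subject-hash links (c_rehash naming), which let verifiers locate a CA certificate by hash. A spot check (a sketch; assumes openssl exists in the node image and, per the pairing above, that 51391683 is the hash of the 14479.pem certificate):
	out/minikube-linux-amd64 -p functional-410903 ssh \
	  "openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/14479.pem"   # expect 51391683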

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-410903 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410903 ssh "sudo systemctl is-active docker": exit status 1 (347.943404ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410903 ssh "sudo systemctl is-active crio": exit status 1 (354.072968ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)
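Status 3 is what systemctl is-active returns for an inactive unit; ssh surfaces it ("Process exited with status 3") and minikube then exits 1. With containerd as the active runtime, docker and crio must both be inactive. Sketch:
	out/minikube-linux-amd64 -p functional-410903 ssh "sudo systemctl is-active docker"   # prints inactive; remote exit status 3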

TestFunctional/parallel/License (0.43s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.55s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410903 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-410903
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-410903
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410903 image ls --format short --alsologtostderr:
I1123 08:03:46.707200   61567 out.go:360] Setting OutFile to fd 1 ...
I1123 08:03:46.707544   61567 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:03:46.707560   61567 out.go:374] Setting ErrFile to fd 2...
I1123 08:03:46.707568   61567 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:03:46.707845   61567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
I1123 08:03:46.708524   61567 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:03:46.708694   61567 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:03:46.709360   61567 cli_runner.go:164] Run: docker container inspect functional-410903 --format={{.State.Status}}
I1123 08:03:46.731802   61567 ssh_runner.go:195] Run: systemctl --version
I1123 08:03:46.731871   61567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-410903
I1123 08:03:46.753340   61567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/functional-410903/id_rsa Username:docker}
I1123 08:03:46.860620   61567 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
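Under containerd, image ls is served by crictl over ssh (visible in the stderr above); the same listing can be taken directly from the node. Sketch:
	out/minikube-linux-amd64 -p functional-410903 ssh "sudo crictl images"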

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410903 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/library/nginx                     │ latest             │ sha256:60adc2 │ 59.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kicbase/echo-server               │ functional-410903  │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-410903  │ sha256:436b2c │ 992B   │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410903 image ls --format table --alsologtostderr:
I1123 08:03:49.967716   63026 out.go:360] Setting OutFile to fd 1 ...
I1123 08:03:49.967844   63026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:03:49.967855   63026 out.go:374] Setting ErrFile to fd 2...
I1123 08:03:49.967862   63026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:03:49.968068   63026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
I1123 08:03:49.968633   63026 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:03:49.968720   63026 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:03:49.970196   63026 cli_runner.go:164] Run: docker container inspect functional-410903 --format={{.State.Status}}
I1123 08:03:49.988353   63026 ssh_runner.go:195] Run: systemctl --version
I1123 08:03:49.988408   63026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-410903
I1123 08:03:50.005674   63026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/functional-410903/id_rsa Username:docker}
I1123 08:03:50.107173   63026 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410903 image ls --format json --alsologtostderr:
[{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha
256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-410903","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22631814"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e
4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b9
43b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"59772801"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:c80c8dbafe7dd
71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:436b2c9f0f6a564df914bff1654d5466c4ee7b3e598ef901de24a73a1e6253d1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-410903"],"size":"992"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410903 image ls --format json --alsologtostderr:
I1123 08:03:49.739152   62972 out.go:360] Setting OutFile to fd 1 ...
I1123 08:03:49.739282   62972 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:03:49.739291   62972 out.go:374] Setting ErrFile to fd 2...
I1123 08:03:49.739295   62972 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:03:49.739527   62972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
I1123 08:03:49.740126   62972 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:03:49.740218   62972 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:03:49.740680   62972 cli_runner.go:164] Run: docker container inspect functional-410903 --format={{.State.Status}}
I1123 08:03:49.760133   62972 ssh_runner.go:195] Run: systemctl --version
I1123 08:03:49.760185   62972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-410903
I1123 08:03:49.779002   62972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/functional-410903/id_rsa Username:docker}
I1123 08:03:49.880590   62972 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
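
The JSON variant emits a single array of image objects (id, repoDigests, repoTags, size), which is the easiest form to script against. An illustrative filter, assuming jq is available on the host (jq is not part of this suite):

out/minikube-linux-amd64 -p functional-410903 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'   # print all tagged references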

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410903 image ls --format yaml --alsologtostderr:
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:436b2c9f0f6a564df914bff1654d5466c4ee7b3e598ef901de24a73a1e6253d1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-410903
size: "992"
- id: sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "59772801"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-410903
- docker.io/kicbase/echo-server:latest
size: "2372971"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410903 image ls --format yaml --alsologtostderr:
I1123 08:03:46.972958   61702 out.go:360] Setting OutFile to fd 1 ...
I1123 08:03:46.973216   61702 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:03:46.973227   61702 out.go:374] Setting ErrFile to fd 2...
I1123 08:03:46.973233   61702 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:03:46.973467   61702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
I1123 08:03:46.974069   61702 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:03:46.974282   61702 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:03:46.974823   61702 cli_runner.go:164] Run: docker container inspect functional-410903 --format={{.State.Status}}
I1123 08:03:46.998917   61702 ssh_runner.go:195] Run: systemctl --version
I1123 08:03:46.998981   61702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-410903
I1123 08:03:47.021061   61702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/functional-410903/id_rsa Username:docker}
I1123 08:03:47.130019   61702 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410903 ssh pgrep buildkitd: exit status 1 (316.942553ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image build -t localhost/my-image:functional-410903 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-410903 image build -t localhost/my-image:functional-410903 testdata/build --alsologtostderr: (3.482251336s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410903 image build -t localhost/my-image:functional-410903 testdata/build --alsologtostderr:
I1123 08:03:47.548397   62108 out.go:360] Setting OutFile to fd 1 ...
I1123 08:03:47.548570   62108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:03:47.548581   62108 out.go:374] Setting ErrFile to fd 2...
I1123 08:03:47.548585   62108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:03:47.548783   62108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
I1123 08:03:47.549526   62108 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:03:47.550176   62108 config.go:182] Loaded profile config "functional-410903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:03:47.550692   62108 cli_runner.go:164] Run: docker container inspect functional-410903 --format={{.State.Status}}
I1123 08:03:47.568767   62108 ssh_runner.go:195] Run: systemctl --version
I1123 08:03:47.568842   62108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-410903
I1123 08:03:47.590730   62108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/functional-410903/id_rsa Username:docker}
I1123 08:03:47.691296   62108 build_images.go:162] Building image from path: /tmp/build.1317783156.tar
I1123 08:03:47.691376   62108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 08:03:47.700562   62108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1317783156.tar
I1123 08:03:47.706112   62108 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1317783156.tar: stat -c "%s %y" /var/lib/minikube/build/build.1317783156.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1317783156.tar': No such file or directory
I1123 08:03:47.706146   62108 ssh_runner.go:362] scp /tmp/build.1317783156.tar --> /var/lib/minikube/build/build.1317783156.tar (3072 bytes)
I1123 08:03:47.730935   62108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1317783156
I1123 08:03:47.743008   62108 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1317783156 -xf /var/lib/minikube/build/build.1317783156.tar
I1123 08:03:47.757318   62108 containerd.go:394] Building image: /var/lib/minikube/build/build.1317783156
I1123 08:03:47.757411   62108 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1317783156 --local dockerfile=/var/lib/minikube/build/build.1317783156 --output type=image,name=localhost/my-image:functional-410903
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d56bd4a2f9aad33480de994d016fc96660a675b6b63babdc00e8858cd928e2e5 done
#8 exporting config sha256:7f7ba6c7c9d539d5c300c830ed3d6c21b6c8726655aeadf01ee2abedebc84682 done
#8 naming to localhost/my-image:functional-410903
#8 naming to localhost/my-image:functional-410903 done
#8 DONE 0.1s
I1123 08:03:50.943654   62108 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1317783156 --local dockerfile=/var/lib/minikube/build/build.1317783156 --output type=image,name=localhost/my-image:functional-410903: (3.186207709s)
I1123 08:03:50.943724   62108 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1317783156
I1123 08:03:50.952757   62108 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1317783156.tar
I1123 08:03:50.960803   62108 build_images.go:218] Built localhost/my-image:functional-410903 from /tmp/build.1317783156.tar
I1123 08:03:50.960836   62108 build_images.go:134] succeeded building to: functional-410903
I1123 08:03:50.960843   62108 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image ls
2025/11/23 08:03:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)
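
The buildkit steps above (#5 FROM gcr.io/k8s-minikube/busybox, #6 RUN true, #7 ADD content.txt) pin down what the Dockerfile under testdata/build must roughly contain. A plausible reconstruction, inferred from the log rather than copied from the repo (the real file is 97 bytes, so it likely differs in details):

cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-410903 image build -t localhost/my-image:functional-410903 .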

TestFunctional/parallel/ImageCommands/Setup (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.80312147s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-410903
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)

TestFunctional/parallel/ServiceCmd/DeployApp (16.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-410903 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-410903 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-t9rch" [a43cc019-e302-41bd-952c-529b0af1f6ab] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-t9rch" [a43cc019-e302-41bd-952c-529b0af1f6ab] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.003473995s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.15s)
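
This step creates the deployment/NodePort-service pair the rest of the ServiceCmd group exercises; condensed, the same steps are:

kubectl --context functional-410903 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-410903 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-410903 get pods -l app=hello-node   # wait until the pod reports Running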

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-410903 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-410903 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-410903 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-410903 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 55582: os: process already finished
helpers_test.go:525: unable to kill pid 55377: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image load --daemon kicbase/echo-server:functional-410903 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-410903 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-410903 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [bcf365d8-6ad7-4b00-be85-2714681fcb08] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [bcf365d8-6ad7-4b00-be85-2714681fcb08] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.003308219s
I1123 08:03:36.721527   14479 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image load --daemon kicbase/echo-server:functional-410903 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-410903
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image load --daemon kicbase/echo-server:functional-410903 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.99s)
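
This variant re-tags a freshly pulled host-side image before pushing it into the cluster's containerd, exercising the tag-then-load path end to end; condensed:

docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-410903
out/minikube-linux-amd64 -p functional-410903 image load --daemon kicbase/echo-server:functional-410903
out/minikube-linux-amd64 -p functional-410903 image ls   # the functional-410903 tag should now be listed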

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image save kicbase/echo-server:functional-410903 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
E1123 08:03:23.802762   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image rm kicbase/echo-server:functional-410903 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-410903
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 image save --daemon kicbase/echo-server:functional-410903 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-410903
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
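
Taken together, ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon cover the full export/import surface of minikube image: cluster to tarball, tarball back into the cluster, and cluster into the host Docker daemon. The three invocations side by side (paths shortened from the run above):

out/minikube-linux-amd64 -p functional-410903 image save kicbase/echo-server:functional-410903 ./echo-server-save.tar
out/minikube-linux-amd64 -p functional-410903 image load ./echo-server-save.tar
out/minikube-linux-amd64 -p functional-410903 image save --daemon kicbase/echo-server:functional-410903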

TestFunctional/parallel/ServiceCmd/List (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 service list -o json
functional_test.go:1504: Took "536.375952ms" to run "out/minikube-linux-amd64 -p functional-410903 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30192
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30192
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
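
Note that the --https and plain --url forms resolve to the same NodePort (30192 here) on the node IP 192.168.49.2. A quick manual check (the curl call is illustrative, not part of the test):

URL=$(out/minikube-linux-amd64 -p functional-410903 service hello-node --url)
curl -s "$URL"   # echo-server answers with the request details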

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-410903 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.89.19 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-410903 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
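
The tunnel serial group reduces to: keep minikube tunnel running, wait for the LoadBalancer service to be assigned an ingress IP, hit that IP directly from the host, then tear the tunnel down. A hedged sketch of the same loop (the IP is taken from the AccessDirect step above):

out/minikube-linux-amd64 -p functional-410903 tunnel &   # must stay alive for the duration
kubectl --context functional-410903 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -s http://10.106.89.19/   # served by nginx-svc through the tunnel
kill %1                        # stop the tunnel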

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "374.53758ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "75.300009ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/MountCmd/any-port (8.05s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410903 /tmp/TestFunctionalparallelMountCmdany-port720204683/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763885017789679632" to /tmp/TestFunctionalparallelMountCmdany-port720204683/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763885017789679632" to /tmp/TestFunctionalparallelMountCmdany-port720204683/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763885017789679632" to /tmp/TestFunctionalparallelMountCmdany-port720204683/001/test-1763885017789679632
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.475274ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1123 08:03:38.122522   14479 retry.go:31] will retry after 560.668951ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 08:03 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 08:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 08:03 test-1763885017789679632
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh cat /mount-9p/test-1763885017789679632
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-410903 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [7563750c-1c81-46fd-a97b-202c5fdc5501] Pending
helpers_test.go:352: "busybox-mount" [7563750c-1c81-46fd-a97b-202c5fdc5501] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [7563750c-1c81-46fd-a97b-202c5fdc5501] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E1123 08:03:44.284336   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [7563750c-1c81-46fd-a97b-202c5fdc5501] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00321367s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-410903 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410903 /tmp/TestFunctionalparallelMountCmdany-port720204683/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.05s)
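
The any-port variant lets minikube mount pick its own port; the essential checks are that a 9p filesystem appears at the mount point inside the node and that files written on the host are visible there (and to pods). A by-hand sketch, with /tmp/hostdir as a hypothetical host directory:

out/minikube-linux-amd64 mount -p functional-410903 /tmp/hostdir:/mount-9p &   # keep running in the background
out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-410903 ssh -- ls -la /mount-9p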

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "411.992224ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.010236ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/MountCmd/specific-port (1.86s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410903 /tmp/TestFunctionalparallelMountCmdspecific-port1419836026/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (351.084514ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1123 08:03:46.187959   14479 retry.go:31] will retry after 342.642945ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410903 /tmp/TestFunctionalparallelMountCmdspecific-port1419836026/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410903 ssh "sudo umount -f /mount-9p": exit status 1 (306.999704ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-410903 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410903 /tmp/TestFunctionalparallelMountCmdspecific-port1419836026/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410903 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1299388121/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410903 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1299388121/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410903 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1299388121/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T" /mount1: exit status 1 (378.889436ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1123 08:03:48.078189   14479 retry.go:31] will retry after 606.446962ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410903 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-410903 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410903 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1299388121/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410903 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1299388121/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410903 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1299388121/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-410903
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-410903
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-410903
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (125.22s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1123 08:04:25.246382   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:05:47.169648   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-003512 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m4.485529054s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (125.22s)

TestMultiControlPlane/serial/DeployApp (5.38s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-003512 kubectl -- rollout status deployment/busybox: (3.315035208s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-ch9hq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-l6kf5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-t6rn7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-ch9hq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-l6kf5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-t6rn7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-ch9hq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-l6kf5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-t6rn7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.38s)

TestMultiControlPlane/serial/PingHostFromPods (1.16s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-ch9hq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-ch9hq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-l6kf5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-l6kf5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-t6rn7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 kubectl -- exec busybox-7b57f96db7-t6rn7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)
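
Note: a condensed form of the check above (the pod name is this run's; with the docker driver the host gateway is typically 192.168.49.1):

  # Resolve the host's address from inside a pod, then ping it once.
  HOST_IP=$(kubectl --context ha-003512 exec busybox-7b57f96db7-ch9hq -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context ha-003512 exec busybox-7b57f96db7-ch9hq -- sh -c "ping -c 1 $HOST_IP"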

TestMultiControlPlane/serial/AddWorkerNode (25.87s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-003512 node add --alsologtostderr -v 5: (24.962794636s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.87s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-003512 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

TestMultiControlPlane/serial/CopyFile (17.69s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp testdata/cp-test.txt ha-003512:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile681180062/001/cp-test_ha-003512.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512:/home/docker/cp-test.txt ha-003512-m02:/home/docker/cp-test_ha-003512_ha-003512-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m02 "sudo cat /home/docker/cp-test_ha-003512_ha-003512-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512:/home/docker/cp-test.txt ha-003512-m03:/home/docker/cp-test_ha-003512_ha-003512-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m03 "sudo cat /home/docker/cp-test_ha-003512_ha-003512-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512:/home/docker/cp-test.txt ha-003512-m04:/home/docker/cp-test_ha-003512_ha-003512-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m04 "sudo cat /home/docker/cp-test_ha-003512_ha-003512-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp testdata/cp-test.txt ha-003512-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile681180062/001/cp-test_ha-003512-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m02:/home/docker/cp-test.txt ha-003512:/home/docker/cp-test_ha-003512-m02_ha-003512.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512 "sudo cat /home/docker/cp-test_ha-003512-m02_ha-003512.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m02:/home/docker/cp-test.txt ha-003512-m03:/home/docker/cp-test_ha-003512-m02_ha-003512-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m03 "sudo cat /home/docker/cp-test_ha-003512-m02_ha-003512-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m02:/home/docker/cp-test.txt ha-003512-m04:/home/docker/cp-test_ha-003512-m02_ha-003512-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m04 "sudo cat /home/docker/cp-test_ha-003512-m02_ha-003512-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp testdata/cp-test.txt ha-003512-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile681180062/001/cp-test_ha-003512-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m03:/home/docker/cp-test.txt ha-003512:/home/docker/cp-test_ha-003512-m03_ha-003512.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512 "sudo cat /home/docker/cp-test_ha-003512-m03_ha-003512.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m03:/home/docker/cp-test.txt ha-003512-m02:/home/docker/cp-test_ha-003512-m03_ha-003512-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m02 "sudo cat /home/docker/cp-test_ha-003512-m03_ha-003512-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m03:/home/docker/cp-test.txt ha-003512-m04:/home/docker/cp-test_ha-003512-m03_ha-003512-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m04 "sudo cat /home/docker/cp-test_ha-003512-m03_ha-003512-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp testdata/cp-test.txt ha-003512-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile681180062/001/cp-test_ha-003512-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m04:/home/docker/cp-test.txt ha-003512:/home/docker/cp-test_ha-003512-m04_ha-003512.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512 "sudo cat /home/docker/cp-test_ha-003512-m04_ha-003512.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m04:/home/docker/cp-test.txt ha-003512-m02:/home/docker/cp-test_ha-003512-m04_ha-003512-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m02 "sudo cat /home/docker/cp-test_ha-003512-m04_ha-003512-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 cp ha-003512-m04:/home/docker/cp-test.txt ha-003512-m03:/home/docker/cp-test_ha-003512-m04_ha-003512-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 ssh -n ha-003512-m03 "sudo cat /home/docker/cp-test_ha-003512-m04_ha-003512-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.69s)
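
Note: every pairing above is the same round-trip pattern; a minimal sketch:

  # Host -> node, then read it back over ssh.
  minikube -p ha-003512 cp testdata/cp-test.txt ha-003512:/home/docker/cp-test.txt
  minikube -p ha-003512 ssh -n ha-003512 "sudo cat /home/docker/cp-test.txt"
  # Node -> node copies use <node>:<path> on both sides.
  minikube -p ha-003512 cp ha-003512:/home/docker/cp-test.txt \
      ha-003512-m02:/home/docker/cp-test_ha-003512_ha-003512-m02.txt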

TestMultiControlPlane/serial/StopSecondaryNode (12.76s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-003512 node stop m02 --alsologtostderr -v 5: (12.046369747s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-003512 status --alsologtostderr -v 5: exit status 7 (710.187657ms)

-- stdout --
	ha-003512
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-003512-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003512-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-003512-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1123 08:07:08.462052   83938 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:07:08.462330   83938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:07:08.462343   83938 out.go:374] Setting ErrFile to fd 2...
	I1123 08:07:08.462350   83938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:07:08.462612   83938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:07:08.462798   83938 out.go:368] Setting JSON to false
	I1123 08:07:08.462827   83938 mustload.go:66] Loading cluster: ha-003512
	I1123 08:07:08.462984   83938 notify.go:221] Checking for updates...
	I1123 08:07:08.463303   83938 config.go:182] Loaded profile config "ha-003512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:07:08.463329   83938 status.go:174] checking status of ha-003512 ...
	I1123 08:07:08.463857   83938 cli_runner.go:164] Run: docker container inspect ha-003512 --format={{.State.Status}}
	I1123 08:07:08.484869   83938 status.go:371] ha-003512 host status = "Running" (err=<nil>)
	I1123 08:07:08.484900   83938 host.go:66] Checking if "ha-003512" exists ...
	I1123 08:07:08.485149   83938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-003512
	I1123 08:07:08.503675   83938 host.go:66] Checking if "ha-003512" exists ...
	I1123 08:07:08.504000   83938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:07:08.504038   83938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-003512
	I1123 08:07:08.521864   83938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/ha-003512/id_rsa Username:docker}
	I1123 08:07:08.621030   83938 ssh_runner.go:195] Run: systemctl --version
	I1123 08:07:08.627340   83938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:07:08.640331   83938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:07:08.699879   83938 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:07:08.690093621 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:07:08.700383   83938 kubeconfig.go:125] found "ha-003512" server: "https://192.168.49.254:8443"
	I1123 08:07:08.700408   83938 api_server.go:166] Checking apiserver status ...
	I1123 08:07:08.700447   83938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:07:08.712368   83938 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1348/cgroup
	W1123 08:07:08.720583   83938 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1348/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:07:08.720627   83938 ssh_runner.go:195] Run: ls
	I1123 08:07:08.724665   83938 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:07:08.730220   83938 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:07:08.730262   83938 status.go:463] ha-003512 apiserver status = Running (err=<nil>)
	I1123 08:07:08.730271   83938 status.go:176] ha-003512 status: &{Name:ha-003512 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:07:08.730286   83938 status.go:174] checking status of ha-003512-m02 ...
	I1123 08:07:08.730609   83938 cli_runner.go:164] Run: docker container inspect ha-003512-m02 --format={{.State.Status}}
	I1123 08:07:08.748751   83938 status.go:371] ha-003512-m02 host status = "Stopped" (err=<nil>)
	I1123 08:07:08.748772   83938 status.go:384] host is not running, skipping remaining checks
	I1123 08:07:08.748779   83938 status.go:176] ha-003512-m02 status: &{Name:ha-003512-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:07:08.748797   83938 status.go:174] checking status of ha-003512-m03 ...
	I1123 08:07:08.749059   83938 cli_runner.go:164] Run: docker container inspect ha-003512-m03 --format={{.State.Status}}
	I1123 08:07:08.766843   83938 status.go:371] ha-003512-m03 host status = "Running" (err=<nil>)
	I1123 08:07:08.766864   83938 host.go:66] Checking if "ha-003512-m03" exists ...
	I1123 08:07:08.767162   83938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-003512-m03
	I1123 08:07:08.785168   83938 host.go:66] Checking if "ha-003512-m03" exists ...
	I1123 08:07:08.785431   83938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:07:08.785468   83938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-003512-m03
	I1123 08:07:08.804024   83938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/ha-003512-m03/id_rsa Username:docker}
	I1123 08:07:08.902978   83938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:07:08.915911   83938 kubeconfig.go:125] found "ha-003512" server: "https://192.168.49.254:8443"
	I1123 08:07:08.915941   83938 api_server.go:166] Checking apiserver status ...
	I1123 08:07:08.915981   83938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:07:08.927389   83938 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1267/cgroup
	W1123 08:07:08.935809   83938 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1267/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:07:08.935854   83938 ssh_runner.go:195] Run: ls
	I1123 08:07:08.939566   83938 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:07:08.943999   83938 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:07:08.944019   83938 status.go:463] ha-003512-m03 apiserver status = Running (err=<nil>)
	I1123 08:07:08.944026   83938 status.go:176] ha-003512-m03 status: &{Name:ha-003512-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:07:08.944042   83938 status.go:174] checking status of ha-003512-m04 ...
	I1123 08:07:08.944326   83938 cli_runner.go:164] Run: docker container inspect ha-003512-m04 --format={{.State.Status}}
	I1123 08:07:08.962538   83938 status.go:371] ha-003512-m04 host status = "Running" (err=<nil>)
	I1123 08:07:08.962567   83938 host.go:66] Checking if "ha-003512-m04" exists ...
	I1123 08:07:08.962924   83938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-003512-m04
	I1123 08:07:08.982211   83938 host.go:66] Checking if "ha-003512-m04" exists ...
	I1123 08:07:08.982452   83938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:07:08.982483   83938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-003512-m04
	I1123 08:07:09.000211   83938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/ha-003512-m04/id_rsa Username:docker}
	I1123 08:07:09.098590   83938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:07:09.111299   83938 status.go:176] ha-003512-m04 status: &{Name:ha-003512-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.76s)
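
Note: the non-zero exit is expected here; a sketch of the check:

  minikube -p ha-003512 node stop m02
  # status exits non-zero (7 in this run) whenever any node is down, so don't
  # let it abort a script.
  minikube -p ha-003512 status || echo "degraded: exit $?"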

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.62s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-003512 node start m02 --alsologtostderr -v 5: (7.657571272s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.27s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-003512 stop --alsologtostderr -v 5: (37.294858904s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 start --wait true --alsologtostderr -v 5
E1123 08:08:03.304593   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:17.701344   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:17.708454   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:17.719850   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:17.741453   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:17.782895   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:17.864459   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:18.027267   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:18.349657   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:18.991741   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:20.273079   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:22.834420   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:27.956082   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:31.011000   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:38.197619   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-003512 start --wait true --alsologtostderr -v 5: (1m1.851497379s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 node list --alsologtostderr -v 5
E1123 08:08:58.679254   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.27s)
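
Note: a sketch of the invariant this test asserts (the temp-file paths are illustrative, not from the log):

  minikube -p ha-003512 node list > /tmp/nodes-before
  minikube -p ha-003512 stop
  minikube -p ha-003512 start --wait true
  minikube -p ha-003512 node list > /tmp/nodes-after
  # The node list must be unchanged by a full stop/start cycle.
  diff /tmp/nodes-before /tmp/nodes-after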

TestMultiControlPlane/serial/DeleteSecondaryNode (9.33s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-003512 node delete m03 --alsologtostderr -v 5: (8.518863974s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.33s)
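
Note: the Ready check above in one piece (same go-template as the log, minus the test harness's extra quoting):

  minikube -p ha-003512 node delete m03
  # Each remaining node should print True for its Ready condition.
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'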

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

TestMultiControlPlane/serial/StopCluster (36.16s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 stop --alsologtostderr -v 5
E1123 08:09:39.641089   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-003512 stop --alsologtostderr -v 5: (36.04237529s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-003512 status --alsologtostderr -v 5: exit status 7 (118.071884ms)

-- stdout --
	ha-003512
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003512-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003512-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1123 08:09:44.823671  100208 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:09:44.823822  100208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:09:44.823832  100208 out.go:374] Setting ErrFile to fd 2...
	I1123 08:09:44.823839  100208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:09:44.824050  100208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:09:44.824257  100208 out.go:368] Setting JSON to false
	I1123 08:09:44.824290  100208 mustload.go:66] Loading cluster: ha-003512
	I1123 08:09:44.824399  100208 notify.go:221] Checking for updates...
	I1123 08:09:44.824689  100208 config.go:182] Loaded profile config "ha-003512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:09:44.824714  100208 status.go:174] checking status of ha-003512 ...
	I1123 08:09:44.825169  100208 cli_runner.go:164] Run: docker container inspect ha-003512 --format={{.State.Status}}
	I1123 08:09:44.844164  100208 status.go:371] ha-003512 host status = "Stopped" (err=<nil>)
	I1123 08:09:44.844208  100208 status.go:384] host is not running, skipping remaining checks
	I1123 08:09:44.844216  100208 status.go:176] ha-003512 status: &{Name:ha-003512 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:09:44.844257  100208 status.go:174] checking status of ha-003512-m02 ...
	I1123 08:09:44.844662  100208 cli_runner.go:164] Run: docker container inspect ha-003512-m02 --format={{.State.Status}}
	I1123 08:09:44.862974  100208 status.go:371] ha-003512-m02 host status = "Stopped" (err=<nil>)
	I1123 08:09:44.862993  100208 status.go:384] host is not running, skipping remaining checks
	I1123 08:09:44.862999  100208 status.go:176] ha-003512-m02 status: &{Name:ha-003512-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:09:44.863019  100208 status.go:174] checking status of ha-003512-m04 ...
	I1123 08:09:44.863258  100208 cli_runner.go:164] Run: docker container inspect ha-003512-m04 --format={{.State.Status}}
	I1123 08:09:44.880885  100208 status.go:371] ha-003512-m04 host status = "Stopped" (err=<nil>)
	I1123 08:09:44.880913  100208 status.go:384] host is not running, skipping remaining checks
	I1123 08:09:44.880919  100208 status.go:176] ha-003512-m04 status: &{Name:ha-003512-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.16s)

TestMultiControlPlane/serial/RestartCluster (57.06s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-003512 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (56.225665429s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.06s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

TestMultiControlPlane/serial/AddSecondaryNode (38.5s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 node add --control-plane --alsologtostderr -v 5
E1123 08:11:01.564787   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-003512 node add --control-plane --alsologtostderr -v 5: (37.591076532s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-003512 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.50s)
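
Note: adding control-plane capacity differs from adding a worker only by one flag; a sketch of both forms seen in this group:

  minikube -p ha-003512 node add                  # worker (as in AddWorkerNode above)
  minikube -p ha-003512 node add --control-plane  # additional control-plane node
  minikube -p ha-003512 status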

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

TestJSONOutput/start/Command (41.38s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-352372 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-352372 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (41.380202806s)
--- PASS: TestJSONOutput/start/Command (41.38s)
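
Note: with --output=json every line on stdout is a CloudEvent (see the sample events under TestErrorJSONOutput below); assuming jq is available, the step messages can be pulled out like this:

  minikube start -p json-output-352372 --output=json --user=testUser \
      --memory=3072 --wait=true --driver=docker --container-runtime=containerd \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'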

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-352372 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-352372 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-352372 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-352372 --output=json --user=testUser: (5.873062153s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-671088 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-671088 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.020853ms)

-- stdout --
	{"specversion":"1.0","id":"c7844e4d-4826-4d6c-9046-3036da32a107","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-671088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa1a63a5-78e3-419d-be7f-7cf0ad6de828","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21966"}}
	{"specversion":"1.0","id":"d4b63d4a-6259-4627-95dc-06a999f79512","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1d1a5da6-f118-41e4-8bc8-0b0609fce929","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig"}}
	{"specversion":"1.0","id":"391b895a-c56f-4d0a-93e5-4969793279a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube"}}
	{"specversion":"1.0","id":"5982640f-8186-41a6-a9cd-6f46a427df39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"94419bbb-b5d8-4c89-8f53-4e54d11abd19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"98279f3d-6a8a-43c3-8fe8-6310cfd7494d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-671088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-671088
--- PASS: TestErrorJSONOutput (0.23s)
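
Note: the failure mode above in script form; the final event on stdout carries the machine-readable error and the process exits 56 (jq is an assumption for the parsing step):

  out=$(minikube start -p json-output-error-671088 --memory=3072 --output=json \
          --wait=true --driver=fail)
  echo "exit: $?"   # 56 in this run (DRV_UNSUPPORTED_OS)
  echo "$out" | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'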

TestKicCustomNetwork/create_custom_network (37.3s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-253491 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-253491 --network=: (35.132050746s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-253491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-253491
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-253491: (2.147384351s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.30s)
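
Note: an empty --network= value asks minikube to create a dedicated docker network for the profile; a sketch of the check:

  minikube start -p docker-network-253491 --network=
  docker network ls --format '{{.Name}}'   # the profile's network should be listed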

TestKicCustomNetwork/use_default_bridge_network (26.07s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-058573 --network=bridge
E1123 08:13:03.306254   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:13:17.703695   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-058573 --network=bridge: (24.054294218s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-058573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-058573
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-058573: (1.993982341s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.07s)

TestKicExistingNetwork (23.55s)

=== RUN   TestKicExistingNetwork
I1123 08:13:26.519203   14479 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 08:13:26.537839   14479 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 08:13:26.537901   14479 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 08:13:26.537919   14479 cli_runner.go:164] Run: docker network inspect existing-network
W1123 08:13:26.554754   14479 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 08:13:26.554782   14479 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1123 08:13:26.554803   14479 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1123 08:13:26.554929   14479 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 08:13:26.572447   14479 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-88eb84305350 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:b0:8c:95:93:f7} reservation:<nil>}
I1123 08:13:26.572977   14479 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb5160}
I1123 08:13:26.573009   14479 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 08:13:26.573059   14479 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1123 08:13:26.621351   14479 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-373800 --network=existing-network
E1123 08:13:45.406264   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-373800 --network=existing-network: (21.419564448s)
helpers_test.go:175: Cleaning up "existing-network-373800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-373800
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-373800: (1.99016484s)
I1123 08:13:50.048784   14479 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.55s)
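
Note: the flow this test exercises can be reproduced by hand; a sketch mirroring the logged commands (the profile name net-demo is illustrative, and any free private subnet works):

    # create the bridge network first, outside of any profile
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    # start a profile attached to the pre-existing network
    minikube start -p net-demo --network=existing-network
    # after 'minikube delete -p net-demo' the pre-existing network should still be listed
    docker network ls --format {{.Name}}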

TestKicCustomSubnet (26.94s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-646076 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-646076 --subnet=192.168.60.0/24: (24.737885788s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-646076 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-646076" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-646076
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-646076: (2.177794278s)
--- PASS: TestKicCustomSubnet (26.94s)
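
The same check with --subnet instead of a pre-created network; a sketch (the profile name subnet-demo is illustrative):

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    # confirm the kic network received the requested CIDR
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
    minikube delete -p subnet-demo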

TestKicStaticIP (27.37s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-716377 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-716377 --static-ip=192.168.200.200: (25.111734519s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-716377 ip
helpers_test.go:175: Cleaning up "static-ip-716377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-716377
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-716377: (2.112100113s)
--- PASS: TestKicStaticIP (27.37s)
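
And with a fixed address; a sketch (the profile name ip-demo is illustrative; --static-ip must fall in a private range):

    minikube start -p ip-demo --static-ip=192.168.200.200
    minikube -p ip-demo ip      # expected to print 192.168.200.200
    minikube delete -p ip-demo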

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (52.32s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-632668 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-632668 --driver=docker  --container-runtime=containerd: (24.044763331s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-634737 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-634737 --driver=docker  --container-runtime=containerd: (22.334395838s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-632668
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-634737
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-634737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-634737
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-634737: (2.340959139s)
helpers_test.go:175: Cleaning up "first-632668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-632668
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-632668: (2.34954344s)
--- PASS: TestMinikubeProfile (52.32s)
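
For reference, the profile juggling above reduces to a few commands; a sketch (profile names shortened for readability):

    minikube start -p first --driver=docker --container-runtime=containerd
    minikube start -p second --driver=docker --container-runtime=containerd
    minikube profile first        # switch the active profile back to 'first'
    minikube profile list -ojson  # machine-readable view of both profiles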

TestMountStart/serial/StartWithMountFirst (7.45s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-102542 --memory=3072 --mount-string /tmp/TestMountStartserial2705952210/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-102542 --memory=3072 --mount-string /tmp/TestMountStartserial2705952210/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.447707177s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.45s)
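
The start flags above map one-to-one onto a manual 9p mount at boot; a sketch that mirrors the logged invocation (the host path and profile name are illustrative), with --no-kubernetes keeping the guest minimal since only the mount is under test:

    minikube start -p mount-demo --memory=3072 --no-kubernetes \
      --mount-string /srv/shared:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-port 46464 --mount-msize 6543
    minikube -p mount-demo ssh -- ls /minikube-host   # host files visible in the guest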

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-102542 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (4.5s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-122862 --memory=3072 --mount-string /tmp/TestMountStartserial2705952210/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-122862 --memory=3072 --mount-string /tmp/TestMountStartserial2705952210/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.498044065s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.50s)

TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122862 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.68s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-102542 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-102542 --alsologtostderr -v=5: (1.678276626s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122862 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-122862
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-122862: (1.259767648s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (7.7s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-122862
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-122862: (6.695938714s)
--- PASS: TestMountStart/serial/RestartStopped (7.70s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122862 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (65.06s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-424622 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-424622 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.558225399s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.06s)
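
Creating the two-node profile used throughout this group is a single command; a sketch mirroring the logged flags:

    minikube start -p multinode-424622 --nodes=2 --memory=3072 --wait=true \
      --driver=docker --container-runtime=containerd
    minikube -p multinode-424622 status   # one Control Plane entry plus one Worker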

TestMultiNode/serial/DeployApp2Nodes (5.08s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-424622 -- rollout status deployment/busybox: (3.575073023s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- exec busybox-7b57f96db7-2z5qt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- exec busybox-7b57f96db7-x7nxh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- exec busybox-7b57f96db7-2z5qt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- exec busybox-7b57f96db7-x7nxh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- exec busybox-7b57f96db7-2z5qt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- exec busybox-7b57f96db7-x7nxh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.08s)
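
The per-pod DNS assertions above generalize to a small loop; a sketch (the manifest path is the test's own, and pod names are discovered rather than hard-coded):

    kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done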

TestMultiNode/serial/PingHostFrom2Pods (0.78s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- exec busybox-7b57f96db7-2z5qt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- exec busybox-7b57f96db7-2z5qt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- exec busybox-7b57f96db7-x7nxh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-424622 -- exec busybox-7b57f96db7-x7nxh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
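
The pipeline in those exec calls leans on busybox nslookup's output layout: line 5 carries the resolved address, so awk 'NR==5' | cut -d' ' -f3 extracts the host gateway IP, which the pod then pings. A sketch under that same layout assumption ($pod stands for either busybox pod):

    HOST_IP=$(kubectl exec "$pod" -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl exec "$pod" -- sh -c "ping -c 1 $HOST_IP"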

TestMultiNode/serial/AddNode (25.29s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-424622 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-424622 -v=5 --alsologtostderr: (24.636744899s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.29s)
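
Growing the cluster afterwards needs no extra flags; a sketch:

    minikube node add -p multinode-424622    # joins the next worker (m03 in this run)
    minikube -p multinode-424622 status      # the new node appears as type: Worker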

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-424622 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.68s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp testdata/cp-test.txt multinode-424622:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp multinode-424622:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1927751822/001/cp-test_multinode-424622.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp multinode-424622:/home/docker/cp-test.txt multinode-424622-m02:/home/docker/cp-test_multinode-424622_multinode-424622-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m02 "sudo cat /home/docker/cp-test_multinode-424622_multinode-424622-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp multinode-424622:/home/docker/cp-test.txt multinode-424622-m03:/home/docker/cp-test_multinode-424622_multinode-424622-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m03 "sudo cat /home/docker/cp-test_multinode-424622_multinode-424622-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp testdata/cp-test.txt multinode-424622-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp multinode-424622-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1927751822/001/cp-test_multinode-424622-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp multinode-424622-m02:/home/docker/cp-test.txt multinode-424622:/home/docker/cp-test_multinode-424622-m02_multinode-424622.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622 "sudo cat /home/docker/cp-test_multinode-424622-m02_multinode-424622.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp multinode-424622-m02:/home/docker/cp-test.txt multinode-424622-m03:/home/docker/cp-test_multinode-424622-m02_multinode-424622-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m03 "sudo cat /home/docker/cp-test_multinode-424622-m02_multinode-424622-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp testdata/cp-test.txt multinode-424622-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp multinode-424622-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1927751822/001/cp-test_multinode-424622-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp multinode-424622-m03:/home/docker/cp-test.txt multinode-424622:/home/docker/cp-test_multinode-424622-m03_multinode-424622.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622 "sudo cat /home/docker/cp-test_multinode-424622-m03_multinode-424622.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 cp multinode-424622-m03:/home/docker/cp-test.txt multinode-424622-m02:/home/docker/cp-test_multinode-424622-m03_multinode-424622-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 ssh -n multinode-424622-m02 "sudo cat /home/docker/cp-test_multinode-424622-m03_multinode-424622-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.00s)
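
Every permutation above is built from one primitive, minikube cp, verified over ssh; a sketch (the destination file name copy.txt is illustrative):

    # host -> node
    minikube -p multinode-424622 cp testdata/cp-test.txt multinode-424622:/home/docker/cp-test.txt
    # node -> node
    minikube -p multinode-424622 cp multinode-424622:/home/docker/cp-test.txt \
      multinode-424622-m02:/home/docker/copy.txt
    minikube -p multinode-424622 ssh -n multinode-424622-m02 "sudo cat /home/docker/copy.txt"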

TestMultiNode/serial/StopNode (2.32s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-424622 node stop m03: (1.276342535s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-424622 status: exit status 7 (513.957092ms)

-- stdout --
	multinode-424622
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-424622-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-424622-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-424622 status --alsologtostderr: exit status 7 (524.208286ms)

-- stdout --
	multinode-424622
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-424622-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-424622-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1123 08:17:51.097422  162629 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:17:51.097530  162629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:17:51.097534  162629 out.go:374] Setting ErrFile to fd 2...
	I1123 08:17:51.097539  162629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:17:51.097703  162629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:17:51.097858  162629 out.go:368] Setting JSON to false
	I1123 08:17:51.097886  162629 mustload.go:66] Loading cluster: multinode-424622
	I1123 08:17:51.098028  162629 notify.go:221] Checking for updates...
	I1123 08:17:51.098305  162629 config.go:182] Loaded profile config "multinode-424622": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:17:51.098329  162629 status.go:174] checking status of multinode-424622 ...
	I1123 08:17:51.098794  162629 cli_runner.go:164] Run: docker container inspect multinode-424622 --format={{.State.Status}}
	I1123 08:17:51.120468  162629 status.go:371] multinode-424622 host status = "Running" (err=<nil>)
	I1123 08:17:51.120493  162629 host.go:66] Checking if "multinode-424622" exists ...
	I1123 08:17:51.120755  162629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-424622
	I1123 08:17:51.140347  162629 host.go:66] Checking if "multinode-424622" exists ...
	I1123 08:17:51.140695  162629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:17:51.140755  162629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-424622
	I1123 08:17:51.162299  162629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/multinode-424622/id_rsa Username:docker}
	I1123 08:17:51.261910  162629 ssh_runner.go:195] Run: systemctl --version
	I1123 08:17:51.268466  162629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:17:51.281028  162629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:17:51.340064  162629 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-23 08:17:51.330187045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:17:51.340640  162629 kubeconfig.go:125] found "multinode-424622" server: "https://192.168.67.2:8443"
	I1123 08:17:51.340669  162629 api_server.go:166] Checking apiserver status ...
	I1123 08:17:51.340709  162629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:17:51.353176  162629 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1318/cgroup
	W1123 08:17:51.361977  162629 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1318/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:17:51.362044  162629 ssh_runner.go:195] Run: ls
	I1123 08:17:51.365836  162629 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 08:17:51.370061  162629 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 08:17:51.370086  162629 status.go:463] multinode-424622 apiserver status = Running (err=<nil>)
	I1123 08:17:51.370095  162629 status.go:176] multinode-424622 status: &{Name:multinode-424622 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:17:51.370108  162629 status.go:174] checking status of multinode-424622-m02 ...
	I1123 08:17:51.370345  162629 cli_runner.go:164] Run: docker container inspect multinode-424622-m02 --format={{.State.Status}}
	I1123 08:17:51.389878  162629 status.go:371] multinode-424622-m02 host status = "Running" (err=<nil>)
	I1123 08:17:51.389899  162629 host.go:66] Checking if "multinode-424622-m02" exists ...
	I1123 08:17:51.390147  162629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-424622-m02
	I1123 08:17:51.407858  162629 host.go:66] Checking if "multinode-424622-m02" exists ...
	I1123 08:17:51.408197  162629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:17:51.408240  162629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-424622-m02
	I1123 08:17:51.425642  162629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21966-10922/.minikube/machines/multinode-424622-m02/id_rsa Username:docker}
	I1123 08:17:51.523836  162629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:17:51.535923  162629 status.go:176] multinode-424622-m02 status: &{Name:multinode-424622-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:17:51.535963  162629 status.go:174] checking status of multinode-424622-m03 ...
	I1123 08:17:51.536243  162629 cli_runner.go:164] Run: docker container inspect multinode-424622-m03 --format={{.State.Status}}
	I1123 08:17:51.555031  162629 status.go:371] multinode-424622-m03 host status = "Stopped" (err=<nil>)
	I1123 08:17:51.555056  162629 status.go:384] host is not running, skipping remaining checks
	I1123 08:17:51.555062  162629 status.go:176] multinode-424622-m03 status: &{Name:multinode-424622-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
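
As the exit status 7 above shows, status is non-zero whenever any host is stopped, so scripts should read the code rather than abort; a sketch:

    minikube -p multinode-424622 node stop m03
    minikube -p multinode-424622 status
    rc=$?
    [ "$rc" -eq 7 ] && echo "at least one node is stopped (exit $rc)"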

TestMultiNode/serial/StartAfterStop (6.93s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-424622 node start m03 -v=5 --alsologtostderr: (6.204920232s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.93s)

TestMultiNode/serial/RestartKeepsNodes (69.53s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-424622
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-424622
E1123 08:18:03.306954   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:17.704253   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-424622: (25.045574142s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-424622 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-424622 --wait=true -v=5 --alsologtostderr: (44.361412007s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-424622
--- PASS: TestMultiNode/serial/RestartKeepsNodes (69.53s)
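
The invariant under test, that a full stop/start cycle keeps the node set, can be checked directly; a sketch:

    minikube node list -p multinode-424622        # record the node set
    minikube stop -p multinode-424622             # stops every node in the profile
    minikube start -p multinode-424622 --wait=true
    minikube node list -p multinode-424622        # expected to match the earlier list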

TestMultiNode/serial/DeleteNode (5.27s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-424622 node delete m03: (4.656040983s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)

TestMultiNode/serial/StopMultiNode (24.01s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 stop
E1123 08:19:26.372389   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-424622 stop: (23.810440133s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-424622 status: exit status 7 (98.692811ms)

-- stdout --
	multinode-424622
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-424622-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-424622 status --alsologtostderr: exit status 7 (96.513582ms)

-- stdout --
	multinode-424622
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-424622-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1123 08:19:37.246680  172330 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:19:37.246787  172330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:19:37.246795  172330 out.go:374] Setting ErrFile to fd 2...
	I1123 08:19:37.246800  172330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:19:37.246993  172330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:19:37.247145  172330 out.go:368] Setting JSON to false
	I1123 08:19:37.247171  172330 mustload.go:66] Loading cluster: multinode-424622
	I1123 08:19:37.247313  172330 notify.go:221] Checking for updates...
	I1123 08:19:37.247489  172330 config.go:182] Loaded profile config "multinode-424622": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:19:37.247523  172330 status.go:174] checking status of multinode-424622 ...
	I1123 08:19:37.247991  172330 cli_runner.go:164] Run: docker container inspect multinode-424622 --format={{.State.Status}}
	I1123 08:19:37.267201  172330 status.go:371] multinode-424622 host status = "Stopped" (err=<nil>)
	I1123 08:19:37.267257  172330 status.go:384] host is not running, skipping remaining checks
	I1123 08:19:37.267271  172330 status.go:176] multinode-424622 status: &{Name:multinode-424622 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:19:37.267336  172330 status.go:174] checking status of multinode-424622-m02 ...
	I1123 08:19:37.267646  172330 cli_runner.go:164] Run: docker container inspect multinode-424622-m02 --format={{.State.Status}}
	I1123 08:19:37.285497  172330 status.go:371] multinode-424622-m02 host status = "Stopped" (err=<nil>)
	I1123 08:19:37.285531  172330 status.go:384] host is not running, skipping remaining checks
	I1123 08:19:37.285538  172330 status.go:176] multinode-424622-m02 status: &{Name:multinode-424622-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)

TestMultiNode/serial/RestartMultiNode (44.16s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-424622 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-424622 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (43.541611892s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-424622 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.16s)

TestMultiNode/serial/ValidateNameConflict (22.27s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-424622
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-424622-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-424622-m02 --driver=docker  --container-runtime=containerd: exit status 14 (81.124563ms)

-- stdout --
	* [multinode-424622-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-424622-m02' is duplicated with machine name 'multinode-424622-m02' in profile 'multinode-424622'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-424622-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-424622-m03 --driver=docker  --container-runtime=containerd: (19.484987741s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-424622
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-424622: exit status 80 (295.27869ms)

-- stdout --
	* Adding node m03 to cluster multinode-424622 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-424622-m03 already exists in multinode-424622-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-424622-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-424622-m03: (2.353879879s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.27s)

TestPreload (112.1s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-522632 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-522632 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (45.680863394s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-522632 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-522632 image pull gcr.io/k8s-minikube/busybox: (2.29243601s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-522632
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-522632: (5.754550522s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-522632 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-522632 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (55.686907979s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-522632 image list
helpers_test.go:175: Cleaning up "test-preload-522632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-522632
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-522632: (2.452859457s)
--- PASS: TestPreload (112.10s)
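
The scenario above, an image pulled into a cluster started with --preload=false surviving a restart that uses the default preloaded tarball, looks like this by hand; a sketch (the profile name preload-demo is illustrative):

    minikube start -p preload-demo --preload=false --kubernetes-version=v1.32.0 \
      --driver=docker --container-runtime=containerd
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo          # restart, preload enabled by default
    minikube -p preload-demo image list     # busybox is expected to still be listed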

TestScheduledStopUnix (99.09s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-211614 --memory=3072 --driver=docker  --container-runtime=containerd
E1123 08:23:03.304929   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-211614 --memory=3072 --driver=docker  --container-runtime=containerd: (23.388336322s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-211614 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1123 08:23:03.481963  190593 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:23:03.482239  190593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:03.482248  190593 out.go:374] Setting ErrFile to fd 2...
	I1123 08:23:03.482253  190593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:03.482459  190593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:23:03.482739  190593 out.go:368] Setting JSON to false
	I1123 08:23:03.482829  190593 mustload.go:66] Loading cluster: scheduled-stop-211614
	I1123 08:23:03.483141  190593 config.go:182] Loaded profile config "scheduled-stop-211614": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:23:03.483218  190593 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/config.json ...
	I1123 08:23:03.483394  190593 mustload.go:66] Loading cluster: scheduled-stop-211614
	I1123 08:23:03.483492  190593 config.go:182] Loaded profile config "scheduled-stop-211614": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-211614 -n scheduled-stop-211614
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-211614 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1123 08:23:03.874429  190743 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:23:03.874750  190743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:03.874760  190743 out.go:374] Setting ErrFile to fd 2...
	I1123 08:23:03.874765  190743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:03.875096  190743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:23:03.875437  190743 out.go:368] Setting JSON to false
	I1123 08:23:03.875659  190743 daemonize_unix.go:73] killing process 190627 as it is an old scheduled stop
	I1123 08:23:03.875788  190743 mustload.go:66] Loading cluster: scheduled-stop-211614
	I1123 08:23:03.876194  190743 config.go:182] Loaded profile config "scheduled-stop-211614": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:23:03.876249  190743 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/config.json ...
	I1123 08:23:03.876456  190743 mustload.go:66] Loading cluster: scheduled-stop-211614
	I1123 08:23:03.876619  190743 config.go:182] Loaded profile config "scheduled-stop-211614": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 08:23:03.881171   14479 retry.go:31] will retry after 136.905µs: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.882290   14479 retry.go:31] will retry after 115.057µs: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.883410   14479 retry.go:31] will retry after 154.727µs: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.884576   14479 retry.go:31] will retry after 471.168µs: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.885691   14479 retry.go:31] will retry after 504.88µs: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.886824   14479 retry.go:31] will retry after 873.402µs: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.887930   14479 retry.go:31] will retry after 1.473268ms: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.890114   14479 retry.go:31] will retry after 2.462518ms: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.893294   14479 retry.go:31] will retry after 2.405504ms: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.896520   14479 retry.go:31] will retry after 5.618753ms: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.902722   14479 retry.go:31] will retry after 3.647095ms: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.906937   14479 retry.go:31] will retry after 12.388766ms: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.920457   14479 retry.go:31] will retry after 18.891076ms: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.939740   14479 retry.go:31] will retry after 18.693695ms: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.958992   14479 retry.go:31] will retry after 23.019321ms: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
I1123 08:23:03.982247   14479 retry.go:31] will retry after 26.064225ms: open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-211614 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1123 08:23:17.704056   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-211614 -n scheduled-stop-211614
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-211614
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-211614 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1123 08:23:29.779428  191633 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:23:29.779539  191633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:29.779547  191633 out.go:374] Setting ErrFile to fd 2...
	I1123 08:23:29.779550  191633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:29.779751  191633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:23:29.780030  191633 out.go:368] Setting JSON to false
	I1123 08:23:29.780107  191633 mustload.go:66] Loading cluster: scheduled-stop-211614
	I1123 08:23:29.780406  191633 config.go:182] Loaded profile config "scheduled-stop-211614": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:23:29.780467  191633 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/scheduled-stop-211614/config.json ...
	I1123 08:23:29.780666  191633 mustload.go:66] Loading cluster: scheduled-stop-211614
	I1123 08:23:29.780755  191633 config.go:182] Loaded profile config "scheduled-stop-211614": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-211614
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-211614: exit status 7 (79.904704ms)

-- stdout --
	scheduled-stop-211614
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-211614 -n scheduled-stop-211614
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-211614 -n scheduled-stop-211614: exit status 7 (78.31394ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-211614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-211614
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-211614: (4.179074507s)
--- PASS: TestScheduledStopUnix (99.09s)
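
The "retry.go:31] will retry after Nms" lines at the top of this test come from a polling helper that re-checks the pid file with a growing, jittered delay. A minimal sketch of that pattern, assuming nothing about minikube's actual retry implementation beyond what the log shows (helper name, attempt count, and base delay here are illustrative):

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or maxAttempts is
// exhausted, sleeping a jittered, growing delay between attempts.
// The exact schedule minikube uses is not shown in the log; this
// only illustrates the pattern behind the "will retry after" lines.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	delay := base
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if attempt == maxAttempts {
			return fmt.Errorf("after %d attempts: %w", attempt, err)
		}
		// Jitter so concurrent pollers do not fire in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow roughly 1.5x, as the logged delays do
	}
}

func main() {
	pidFile := "/tmp/scheduled-stop-demo/pid" // illustrative path
	err := retryWithBackoff(5, 15*time.Millisecond, func() error {
		_, readErr := os.ReadFile(pidFile)
		return readErr
	})
	fmt.Println("final result:", err)
}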

                                                
                                    
TestInsufficientStorage (12.26s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-157021 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-157021 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.756663868s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dbc323a5-d7d0-4858-9599-1240e4f21385","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-157021] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"30b1fb61-14c4-48e4-8f43-a7d44240cd76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21966"}}
	{"specversion":"1.0","id":"751a41e8-af2a-411a-8a6b-7698e676676b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"52623e08-4b2d-4bf5-bcf0-e1e2458e5f63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig"}}
	{"specversion":"1.0","id":"de3c4967-29db-4f15-aa61-4391e709f5ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube"}}
	{"specversion":"1.0","id":"7be2da49-6e3d-4bf9-a08f-c8d0eefd953c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e856785b-48eb-4dc2-b12d-191325de7364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4380b922-d21b-4a44-88c2-0fa5318dcbd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c97950ac-aef7-4155-bc23-1a4c21f91a80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0e1415f6-a050-4bf2-98af-e938944b4f4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f738beac-6c6e-4fd4-94b8-9fd8a676c412","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"906c5499-6330-4a5e-b8f9-aace87f596e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-157021\" primary control-plane node in \"insufficient-storage-157021\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0561f887-365b-4cb6-9963-806cc7aef0f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e29f9e8b-b864-4a65-bc3c-ab0efb027431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b386c1e1-31fd-4c6f-8c1c-6eec6db4c62c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-157021 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-157021 --output=json --layout=cluster: exit status 7 (295.861426ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-157021","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-157021","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 08:24:29.164304  193880 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-157021" does not appear in /home/jenkins/minikube-integration/21966-10922/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-157021 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-157021 --output=json --layout=cluster: exit status 7 (297.091447ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-157021","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-157021","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 08:24:29.462374  193989 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-157021" does not appear in /home/jenkins/minikube-integration/21966-10922/kubeconfig
	E1123 08:24:29.472663  193989 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/insufficient-storage-157021/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-157021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-157021
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-157021: (1.908693088s)
--- PASS: TestInsufficientStorage (12.26s)
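
With --output=json, minikube emits one CloudEvents-style JSON object per line, as in the stdout block above. A sketch of how a caller might scan that stream and surface the io.k8s.sigs.minikube.error event (struct fields are taken from the log lines themselves; this is not minikube's own parser, and real events may carry more fields):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the JSON lines above.
type event struct {
	Type string `json:"type"`
	Data struct {
		Message  string `json:"message"`
		Name     string `json:"name"`
		ExitCode string `json:"exitcode"`
		Advice   string `json:"advice"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. piped from: minikube start --output=json
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise in the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// For this test: RSRC_DOCKER_STORAGE with exitcode 26.
			fmt.Printf("error %s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
		}
	}
}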

                                                
                                    
TestRunningBinaryUpgrade (45.16s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.317927986 start -p running-upgrade-009804 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.317927986 start -p running-upgrade-009804 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (20.699286232s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-009804 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1123 08:28:03.304764   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:28:17.701254   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-009804 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (19.413730048s)
helpers_test.go:175: Cleaning up "running-upgrade-009804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-009804
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-009804: (1.976588427s)
--- PASS: TestRunningBinaryUpgrade (45.16s)

                                                
                                    
TestKubernetesUpgrade (325.48s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-644962 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-644962 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.333746477s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-644962
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-644962: (1.299170866s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-644962 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-644962 status --format={{.Host}}: exit status 7 (86.817589ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-644962 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-644962 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m40.486889853s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-644962 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-644962 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-644962 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (83.216181ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-644962] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-644962
	    minikube start -p kubernetes-upgrade-644962 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6449622 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-644962 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-644962 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-644962 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.971313161s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-644962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-644962
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-644962: (2.154337654s)
--- PASS: TestKubernetesUpgrade (325.48s)
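
The K8S_DOWNGRADE_UNSUPPORTED exit above reflects a simple invariant: the requested version may not be older than the version the existing cluster already runs. A sketch of that guard using golang.org/x/mod/semver (the module path is real; the surrounding function is illustrative, not minikube's code):

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkNoDowngrade refuses a Kubernetes version older than the one
// the existing cluster runs, mirroring the guard that produced exit
// status 106 above. Versions must carry the leading "v" that
// semver.Compare expects (e.g. "v1.34.1").
func checkNoDowngrade(current, requested string) error {
	if !semver.IsValid(current) || !semver.IsValid(requested) {
		return fmt.Errorf("invalid version: %q vs %q", current, requested)
	}
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	if err := checkNoDowngrade("v1.34.1", "v1.28.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}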

                                                
                                    
TestMissingContainerUpgrade (117.01s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.739918836 start -p missing-upgrade-685997 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.739918836 start -p missing-upgrade-685997 --memory=3072 --driver=docker  --container-runtime=containerd: (1m2.440191842s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-685997
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-685997
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-685997 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-685997 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (47.440579622s)
helpers_test.go:175: Cleaning up "missing-upgrade-685997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-685997
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-685997: (3.774321365s)
--- PASS: TestMissingContainerUpgrade (117.01s)

                                                
                                    
TestPause/serial/Start (53.9s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-745270 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-745270 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (53.899230604s)
--- PASS: TestPause/serial/Start (53.90s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-823406 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-823406 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (100.881491ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-823406] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
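
The MK_USAGE failure above is ordinary mutually-exclusive-flag validation: --kubernetes-version only makes sense when Kubernetes will actually be started. A minimal sketch with the standard flag package (the flag names match the log; the validation itself is illustrative, since minikube builds its CLI differently):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Reject the contradictory combination up front, before doing any
	// work, and exit with the usage status seen in the log (14).
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok")
}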

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (33.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-823406 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1123 08:24:40.769791   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-823406 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.853610467s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-823406 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.21s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-823406 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-823406 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (14.364358675s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-823406 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-823406 status -o json: exit status 2 (331.8328ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-823406","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-823406
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-823406: (2.082431285s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.78s)

                                                
                                    
TestNoKubernetes/serial/Start (7.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-823406 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-823406 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.435247779s)
--- PASS: TestNoKubernetes/serial/Start (7.44s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-745270 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-745270 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.197125357s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.22s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21966-10922/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
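
This subtest only asserts that the version-pinned cache directory picked up no Kubernetes downloads. A sketch of that assertion (the v0.0.0 path segment comes from the log line above; the program around it is illustrative):

package main

import (
	"fmt"
	"os"
)

func main() {
	// In no-kubernetes mode the placeholder version v0.0.0 must not
	// accumulate kubelet/kubeadm/kubectl downloads.
	dir := os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v0.0.0")
	entries, err := os.ReadDir(dir)
	if err != nil {
		if os.IsNotExist(err) {
			fmt.Println("ok: cache directory absent")
			return
		}
		fmt.Println("read error:", err)
		os.Exit(1)
	}
	if len(entries) != 0 {
		fmt.Printf("unexpected downloads in %s: %d entries\n", dir, len(entries))
		os.Exit(1)
	}
	fmt.Println("ok: cache directory empty")
}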

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-823406 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-823406 "sudo systemctl is-active --quiet service kubelet": exit status 1 (386.153273ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
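
This check deliberately expects a non-zero exit: systemctl is-active returns 3 for an inactive unit, and minikube ssh propagates that ("Process exited with status 3" above). A sketch of driving the same probe from Go (the binary path and profile name are taken from the log; the error handling is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-823406",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("FAIL: kubelet is active but should not be running")
	case errors.As(err, &exitErr):
		// systemctl is-active exits 3 when the unit is inactive, so a
		// non-zero status here is the expected, passing outcome.
		fmt.Printf("ok: kubelet not running (exit status %d)\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run probe:", err)
	}
}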

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.614057319s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.53s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-823406
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-823406: (1.301826747s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestPause/serial/Pause (0.85s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-745270 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-823406 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-823406 --driver=docker  --container-runtime=containerd: (7.087953517s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.09s)

                                                
                                    
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-745270 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-745270 --output=json --layout=cluster: exit status 2 (373.512116ms)

                                                
                                                
-- stdout --
	{"Name":"pause-745270","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-745270","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
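
The --layout=cluster payload above encodes state as HTTP-flavored status codes: 200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage, all of which appear somewhere in this report. A sketch of decoding it (the structs mirror only the fields shown; minikube's real schema may be wider):

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed from the stdout block above.
	raw := `{"Name":"pause-745270","StatusCode":418,"StatusName":"Paused",
	 "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},
	 "Nodes":[{"Name":"pause-745270","StatusCode":200,
	   "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	                 "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName) // pause-745270: 418 Paused
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s -> %d %s\n", c.Name, c.StatusCode, c.StatusName)
		}
	}
}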

                                                
                                    
TestPause/serial/Unpause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-745270 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

                                                
                                    
TestPause/serial/PauseAgain (0.79s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-745270 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

                                                
                                    
TestPause/serial/DeletePaused (2.77s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-745270 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-745270 --alsologtostderr -v=5: (2.767233354s)
--- PASS: TestPause/serial/DeletePaused (2.77s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.8s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-745270
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-745270: exit status 1 (18.446318ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-745270: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.80s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-823406 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-823406 "sudo systemctl is-active --quiet service kubelet": exit status 1 (307.701148ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.73s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (97.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2337345326 start -p stopped-upgrade-273955 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2337345326 start -p stopped-upgrade-273955 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m4.716674476s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2337345326 -p stopped-upgrade-273955 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2337345326 -p stopped-upgrade-273955 stop: (3.991408922s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-273955 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-273955 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.606902486s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (97.32s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-273955
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-273955: (1.222062146s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                    
TestNetworkPlugins/group/false (3.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-366757 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-366757 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (167.107675ms)

                                                
                                                
-- stdout --
	* [false-366757] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:27:31.080587  240311 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:27:31.080852  240311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:27:31.080861  240311 out.go:374] Setting ErrFile to fd 2...
	I1123 08:27:31.080866  240311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:27:31.081054  240311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10922/.minikube/bin
	I1123 08:27:31.081567  240311 out.go:368] Setting JSON to false
	I1123 08:27:31.082773  240311 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4189,"bootTime":1763882262,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:27:31.082829  240311 start.go:143] virtualization: kvm guest
	I1123 08:27:31.084695  240311 out.go:179] * [false-366757] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:27:31.085948  240311 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:27:31.085952  240311 notify.go:221] Checking for updates...
	I1123 08:27:31.088431  240311 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:27:31.089610  240311 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10922/kubeconfig
	I1123 08:27:31.091028  240311 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10922/.minikube
	I1123 08:27:31.092123  240311 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:27:31.093272  240311 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:27:31.095068  240311 config.go:182] Loaded profile config "cert-expiration-215889": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:27:31.095178  240311 config.go:182] Loaded profile config "kubernetes-upgrade-644962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:27:31.095263  240311 config.go:182] Loaded profile config "missing-upgrade-685997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1123 08:27:31.095353  240311 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:27:31.119716  240311 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:27:31.119818  240311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:27:31.181726  240311 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:27:31.170842003 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:27:31.181836  240311 docker.go:319] overlay module found
	I1123 08:27:31.183498  240311 out.go:179] * Using the docker driver based on user configuration
	I1123 08:27:31.184778  240311 start.go:309] selected driver: docker
	I1123 08:27:31.184793  240311 start.go:927] validating driver "docker" against <nil>
	I1123 08:27:31.184803  240311 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:27:31.186542  240311 out.go:203] 
	W1123 08:27:31.187738  240311 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1123 08:27:31.188949  240311 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-366757 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-366757

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-366757

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-366757

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-366757

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-366757

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-366757

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-366757

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-366757

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-366757

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-366757

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-366757

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-366757" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-366757" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-366757" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-366757" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-366757" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-366757" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-366757" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-366757" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-366757" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-366757" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-366757" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:25:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-215889
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:26:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-644962
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-685997
contexts:
- context:
    cluster: cert-expiration-215889
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:25:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-215889
  name: cert-expiration-215889
- context:
    cluster: kubernetes-upgrade-644962
    user: kubernetes-upgrade-644962
  name: kubernetes-upgrade-644962
- context:
    cluster: missing-upgrade-685997
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-685997
  name: missing-upgrade-685997
current-context: ""
kind: Config
users:
- name: cert-expiration-215889
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/cert-expiration-215889/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/cert-expiration-215889/client.key
- name: kubernetes-upgrade-644962
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kubernetes-upgrade-644962/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kubernetes-upgrade-644962/client.key
- name: missing-upgrade-685997
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/missing-upgrade-685997/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/missing-upgrade-685997/client.key

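The dump above is the merged kubeconfig, i.e. what `kubectl config view` prints. current-context is empty and there is no false-366757 entry at all, which is why the context lookup in the "cms" section below fails. A minimal way to confirm the same state by hand (assuming the harness's KUBECONFIG):

    kubectl config get-contexts        # false-366757 is absent from the list
    kubectl config current-context     # errors: current-context is not set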
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-366757

>>> host: docker daemon status:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: docker daemon config:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: /etc/docker/daemon.json:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: docker system info:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: cri-docker daemon status:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: cri-docker daemon config:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: cri-dockerd version:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: containerd daemon status:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: containerd daemon config:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: /etc/containerd/config.toml:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: containerd config dump:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: crio daemon status:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: crio daemon config:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: /etc/crio:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

>>> host: crio config:
* Profile "false-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-366757"

----------------------- debugLogs end: false-366757 [took: 3.182336349s] --------------------------------
helpers_test.go:175: Cleaning up "false-366757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-366757
--- PASS: TestNetworkPlugins/group/false (3.52s)
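The wall of "Profile "false-366757" not found" responses above is expected: the false CNI case is rejected during flag validation before any cluster is created, so debugLogs has no profile to inspect. The equivalent manual cleanup uses real minikube subcommands:

    minikube profile list              # shows which profiles actually exist
    minikube delete -p false-366757    # what helpers_test.go runs above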

TestNetworkPlugins/group/auto/Start (43.52s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (43.519988522s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.52s)

TestNetworkPlugins/group/kindnet/Start (42.45s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (42.447702682s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.45s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-366757 "pgrep -a kubelet"
I1123 08:28:32.343360   14479 config.go:182] Loaded profile config "auto-366757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (8.22s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-366757 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fbhgc" [1f44de00-b0af-4833-bd54-3b2d9bf4652e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fbhgc" [1f44de00-b0af-4833-bd54-3b2d9bf4652e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004563434s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.22s)
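Each NetCatPod step force-replaces the small netcat/dnsutils deployment from testdata and then polls until a pod labelled app=netcat reports Running. A hand-run equivalent of that poll:

    kubectl --context auto-366757 get pods -l app=netcat -w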

TestNetworkPlugins/group/calico/Start (53.29s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (53.287177382s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.29s)

TestNetworkPlugins/group/auto/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-366757 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
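The Localhost and HairPin probes share the same BusyBox nc invocation and differ only in the target: localhost checks loopback reachability of the pod's own port, while dialing the netcat service name exercises hairpin NAT back to the originating pod. Annotated form of the command the test runs (flags as in BusyBox nc):

    # -z: scan without sending data; -w 5: 5s timeout; -i 5: 5s delay interval
    kubectl --context auto-366757 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"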

TestNetworkPlugins/group/custom-flannel/Start (48.92s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (48.916383609s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.92s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-brd9b" [e24c73e4-46d3-4b0f-9dd7-892756ae03b6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003851705s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
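ControllerPod only asserts that the CNI's own agent pod is Running; note the label selector is plugin-specific (app=kindnet here, k8s-app=calico-node for Calico, app=flannel in the kube-flannel namespace later). The same check by hand:

    kubectl --context kindnet-366757 -n kube-system get pods -l app=kindnet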

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-366757 "pgrep -a kubelet"
I1123 08:29:12.180230   14479 config.go:182] Loaded profile config "kindnet-366757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-366757 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gc2f7" [9cf2e3b2-413e-46b8-be5b-10b813f39cb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gc2f7" [9cf2e3b2-413e-46b8-be5b-10b813f39cb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003958259s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-366757 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-dfzw2" [42fb893d-9095-4fed-9956-ce2b102b1ef0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005231334s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-366757 "pgrep -a kubelet"
I1123 08:29:37.671981   14479 config.go:182] Loaded profile config "calico-366757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (9.32s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-366757 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jf4nt" [43755339-1b83-4088-b52a-c825330646c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jf4nt" [43755339-1b83-4088-b52a-c825330646c1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.117790127s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.32s)

TestNetworkPlugins/group/enable-default-cni/Start (64.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m4.203380026s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.20s)
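--enable-default-cni is the legacy spelling for minikube's built-in bridge CNI; current releases document it as deprecated in favor of --cni=bridge. To see which CNI a profile actually resolved to, one can read the profile's stored config (path and field name assumed from minikube's standard profile layout):

    grep -o '"CNI": *"[^"]*"' ~/.minikube/profiles/enable-default-cni-366757/config.json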

TestNetworkPlugins/group/calico/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-366757 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-366757 "pgrep -a kubelet"
I1123 08:29:51.284482   14479 config.go:182] Loaded profile config "custom-flannel-366757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-366757 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h7w9s" [3072e321-7e38-4a57-877a-41d2f3cf8966] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h7w9s" [3072e321-7e38-4a57-877a-41d2f3cf8966] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003926706s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-366757 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (54.97s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.972172738s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.97s)

TestNetworkPlugins/group/bridge/Start (67.08s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-366757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m7.084277665s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.08s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-366757 "pgrep -a kubelet"
I1123 08:30:46.714209   14479 config.go:182] Loaded profile config "enable-default-cni-366757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-366757 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7pvjn" [a0e587ee-4fe1-4c2f-a000-b79cdd50f58d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7pvjn" [a0e587ee-4fe1-4c2f-a000-b79cdd50f58d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004309625s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-366757 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-rr6vj" [46e7c88a-b744-4601-9ff5-bddd8d4684fe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004720584s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-366757 "pgrep -a kubelet"
I1123 08:31:10.446406   14479 config.go:182] Loaded profile config "flannel-366757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-366757 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zrzzz" [ed6582cb-f2f0-46bc-a65b-fc48db291c3d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zrzzz" [ed6582cb-f2f0-46bc-a65b-fc48db291c3d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.045496411s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

TestStartStop/group/old-k8s-version/serial/FirstStart (55.72s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-644335 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-644335 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (55.717864351s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (55.72s)

TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-366757 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-366757 "pgrep -a kubelet"
I1123 08:31:30.551474   14479 config.go:182] Loaded profile config "bridge-366757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (8.75s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-366757 replace --force -f testdata/netcat-deployment.yaml
I1123 08:31:31.272262   14479 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1123 08:31:31.276180   14479 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rnbb5" [21fecf99-b266-42ec-8d45-aeac8f6f0631] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rnbb5" [21fecf99-b266-42ec-8d45-aeac8f6f0631] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.005004464s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.75s)
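The two kapi.go lines above show how the harness decides the deployment has stabilized: it compares .metadata.generation (the spec revision) with .status.observedGeneration and the replica counts before it starts waiting on pods. The same comparison by hand:

    kubectl --context bridge-366757 get deploy netcat \
      -o jsonpath='{.metadata.generation} {.status.observedGeneration}'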

TestStartStop/group/no-preload/serial/FirstStart (55.85s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-073500 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-073500 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.848130648s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.85s)
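--preload=false makes this profile pull all control-plane images from the registry instead of unpacking minikube's preloaded image tarball, which is the point of the no-preload group and part of why its FirstStart runs longer. One way to inspect what ended up in the containerd image store:

    out/minikube-linux-amd64 ssh -p no-preload-073500 -- sudo crictl images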

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-366757 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-366757 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestStartStop/group/embed-certs/serial/FirstStart (46.27s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-329854 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-329854 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (46.266349305s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.27s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.56s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (43.559965606s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-644335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-644335 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)
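This addon step doubles as an image-override test: --images=MetricsServer=registry.k8s.io/echoserver:1.4 together with --registries=MetricsServer=fake.domain repoints the metrics-server deployment at a deliberately unreachable registry, and the describe call verifies the substituted reference. A direct check of the rendered image:

    kubectl --context old-k8s-version-644335 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'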

TestStartStop/group/old-k8s-version/serial/Stop (12.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-644335 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-644335 --alsologtostderr -v=3: (12.171529223s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-644335 -n old-k8s-version-644335
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-644335 -n old-k8s-version-644335: exit status 7 (82.080735ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-644335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
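Exit status 7 from minikube status is expected at this point: the host was stopped two steps earlier, status reports that state through its exit code, and the harness explicitly tolerates it ("may be ok"). To observe the code directly:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-644335; echo "exit=$?"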

TestStartStop/group/old-k8s-version/serial/SecondStart (46.58s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-644335 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-644335 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (46.236576697s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-644335 -n old-k8s-version-644335
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.58s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-329854 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-329854 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/embed-certs/serial/Stop (12.21s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-329854 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-329854 --alsologtostderr -v=3: (12.206888792s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.21s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-073500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-073500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051396677s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-073500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (12.14s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-073500 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-073500 --alsologtostderr -v=3: (12.142490866s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-329854 -n embed-certs-329854
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-329854 -n embed-certs-329854: exit status 7 (84.310754ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-329854 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (50.7s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-329854 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-329854 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.331169492s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-329854 -n embed-certs-329854
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.70s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-073500 -n no-preload-073500
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-073500 -n no-preload-073500: exit status 7 (98.392263ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-073500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (52.21s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-073500 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-073500 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (51.851422179s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-073500 -n no-preload-073500
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.21s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-589368 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-589368 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.61s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-589368 --alsologtostderr -v=3
E1123 08:33:03.304633   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/addons-668375/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-589368 --alsologtostderr -v=3: (12.610848979s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.61s)
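The cert_rotation error in this block is background noise rather than a Stop failure: client-go's certificate reloader still references addons-668375, a profile deleted earlier in the run, and finds its client.crt gone. If the stale kubeconfig entry lingers in a local run it can be pruned (only where such a context actually exists):

    kubectl config get-contexts
    kubectl config delete-context addons-668375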

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368: exit status 7 (95.468056ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-589368 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1123 08:33:17.701332   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/functional-410903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-589368 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (44.6826782s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.06s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-q4x6g" [916aef9d-3a0a-43a2-9c6e-656186ac521e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003823573s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
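The polling helper above is roughly what `kubectl wait` does natively; a rough equivalent, assuming the same label and a matching timeout:

# Block until the dashboard pod is Ready, or fail after 9 minutes
kubectl --context old-k8s-version-644335 -n kubernetes-dashboard \
  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m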

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-q4x6g" [916aef9d-3a0a-43a2-9c6e-656186ac521e] Running
E1123 08:33:32.551302   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:32.557745   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:32.569611   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:32.591367   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:32.632814   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:32.714285   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:32.876152   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:33.197821   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:33.839920   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:35.121675   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004045393s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-644335 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-644335 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
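The image audit simply lists what the runtime holds and flags repositories outside the expected system set; a hand-rolled approximation (the allow-list of prefixes is an assumption, not the test's exact rule):

# Print any image whose repository is outside the usual minikube/system registries
out/minikube-linux-amd64 -p old-k8s-version-644335 image list \
  | grep -vE '^(registry\.k8s\.io/|gcr\.io/k8s-minikube/|docker\.io/kindest/)' || true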

TestStartStop/group/old-k8s-version/serial/Pause (2.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-644335 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-644335 -n old-k8s-version-644335
E1123 08:33:37.683547   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-644335 -n old-k8s-version-644335: exit status 2 (343.508903ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-644335 -n old-k8s-version-644335
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-644335 -n old-k8s-version-644335: exit status 2 (341.748836ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-644335 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-644335 -n old-k8s-version-644335
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-644335 -n old-k8s-version-644335
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.89s)
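The pause cycle above has a consistent shape: after `pause`, the apiserver reports `Paused` and the kubelet `Stopped`, both with exit status 2, and `unpause` restores them. A condensed manual version (expected outputs noted from this run):

out/minikube-linux-amd64 pause -p old-k8s-version-644335
out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-644335  # "Paused", exit 2
out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-644335    # "Stopped", exit 2
out/minikube-linux-amd64 unpause -p old-k8s-version-644335
out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-644335  # exit 0 again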

TestStartStop/group/newest-cni/serial/FirstStart (26.8s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-611166 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1123 08:33:42.805045   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-611166 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (26.802129048s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.80s)
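One quick way to confirm that the `kubeadm.pod-network-cidr` extra-config actually landed is to read the node spec after the start completes (a sketch using the standard node API field; single-node profile assumed):

# Expect a subnet carved out of 10.42.0.0/16
kubectl --context newest-cni-611166 get nodes -o jsonpath='{.items[0].spec.podCIDR}'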

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g6h22" [a3587617-dac4-48a0-a8f1-9662fb0cca62] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003886369s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d2tt2" [fbef51c8-5b22-4bf9-b2b1-5b457f89463d] Running
E1123 08:33:53.047049   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/auto-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00410675s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g6h22" [a3587617-dac4-48a0-a8f1-9662fb0cca62] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003462918s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-329854 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d2tt2" [fbef51c8-5b22-4bf9-b2b1-5b457f89463d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004037673s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-073500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-329854 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-329854 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-329854 -n embed-certs-329854
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-329854 -n embed-certs-329854: exit status 2 (385.433313ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-329854 -n embed-certs-329854
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-329854 -n embed-certs-329854: exit status 2 (366.65428ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-329854 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-329854 -n embed-certs-329854
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-329854 -n embed-certs-329854
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.21s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g5wjk" [57eabcc8-ff06-4d05-8079-477d26f4f887] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004926647s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-073500 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.05s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-073500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-073500 -n no-preload-073500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-073500 -n no-preload-073500: exit status 2 (359.273529ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-073500 -n no-preload-073500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-073500 -n no-preload-073500: exit status 2 (373.441293ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-073500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-073500 -n no-preload-073500
E1123 08:34:05.873863   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kindnet-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:34:05.880251   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kindnet-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:34:05.891727   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kindnet-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:34:05.913998   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kindnet-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-073500 -n no-preload-073500
E1123 08:34:05.955429   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kindnet-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:34:06.037321   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kindnet-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:34:06.198734   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kindnet-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.05s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g5wjk" [57eabcc8-ff06-4d05-8079-477d26f4f887] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003418118s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-589368 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-611166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-589368 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-589368 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368
E1123 08:34:11.005922   14479 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kindnet-366757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368: exit status 2 (315.158958ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368: exit status 2 (328.041713ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-589368 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-589368 -n default-k8s-diff-port-589368
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)

TestStartStop/group/newest-cni/serial/Stop (1.38s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-611166 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-611166 --alsologtostderr -v=3: (1.377397814s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.38s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-611166 -n newest-cni-611166
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-611166 -n newest-cni-611166: exit status 7 (86.086637ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-611166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (9.96s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-611166 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-611166 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (9.611451255s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-611166 -n newest-cni-611166
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (9.96s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-611166 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-611166 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-611166 -n newest-cni-611166
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-611166 -n newest-cni-611166: exit status 2 (312.081447ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-611166 -n newest-cni-611166
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-611166 -n newest-cni-611166: exit status 2 (318.938158ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-611166 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-611166 -n newest-cni-611166
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-611166 -n newest-cni-611166
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.59s)

Test skip (26/333)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.38s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-366757 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-366757

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-366757

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-366757

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-366757

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-366757

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-366757

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-366757

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-366757

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-366757

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-366757

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: /etc/hosts:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: /etc/resolv.conf:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-366757

>>> host: crictl pods:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: crictl containers:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> k8s: describe netcat deployment:
error: context "kubenet-366757" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-366757" does not exist

>>> k8s: netcat logs:
error: context "kubenet-366757" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-366757" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-366757" does not exist

>>> k8s: coredns logs:
error: context "kubenet-366757" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-366757" does not exist

>>> k8s: api server logs:
error: context "kubenet-366757" does not exist

>>> host: /etc/cni:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: ip a s:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: ip r s:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: iptables-save:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: iptables table nat:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-366757" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-366757" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-366757" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: kubelet daemon config:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> k8s: kubelet logs:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:25:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-215889
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:26:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-644962
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-685997
contexts:
- context:
    cluster: cert-expiration-215889
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:25:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-215889
  name: cert-expiration-215889
- context:
    cluster: kubernetes-upgrade-644962
    user: kubernetes-upgrade-644962
  name: kubernetes-upgrade-644962
- context:
    cluster: missing-upgrade-685997
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-685997
  name: missing-upgrade-685997
current-context: ""
kind: Config
users:
- name: cert-expiration-215889
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/cert-expiration-215889/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/cert-expiration-215889/client.key
- name: kubernetes-upgrade-644962
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kubernetes-upgrade-644962/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kubernetes-upgrade-644962/client.key
- name: missing-upgrade-685997
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/missing-upgrade-685997/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/missing-upgrade-685997/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-366757

>>> host: docker daemon status:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: docker daemon config:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: docker system info:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: cri-docker daemon status:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: cri-docker daemon config:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: cri-dockerd version:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: containerd daemon status:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: containerd daemon config:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: containerd config dump:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: crio daemon status:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: crio daemon config:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: /etc/crio:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

>>> host: crio config:
* Profile "kubenet-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-366757"

----------------------- debugLogs end: kubenet-366757 [took: 3.213193242s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-366757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-366757
--- SKIP: TestNetworkPlugins/group/kubenet (3.38s)
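
Every probe in the debugLogs dump above fails in one of two ways: host-side probes report that the "kubenet-366757" profile does not exist, and kubectl-side probes report that its context does not exist, because the profile was already torn down when the collector ran. The dump format itself is just a ">>> " header followed by the combined output of one command per probe. The following is a minimal Go sketch of that pattern; the probe list, struct, and command lines are illustrative, not the actual helpers in helpers_test.go or net_test.go:

package main

import (
	"fmt"
	"os/exec"
)

// probe pairs the ">>> " header printed above each dump section with the
// command whose output follows it.
type probe struct {
	header string
	args   []string
}

func main() {
	profile := "kubenet-366757" // the profile the dump above was collected for
	probes := []probe{
		// Host-side probe: runs inside the minikube node and fails with the
		// `* Profile "..." not found` message once the profile is gone.
		{">>> host: ip r s:", []string{"minikube", "-p", profile, "ssh", "ip r s"}},
		// Cluster-side probe: fails with `context "..." does not exist`
		// when the kubeconfig has no entry for the profile.
		{">>> k8s: kube-proxy logs:", []string{"kubectl", "--context", profile,
			"logs", "-n", "kube-system", "-l", "k8s-app=kube-proxy"}},
	}
	for _, p := range probes {
		fmt.Println(p.header)
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("probe failed:", err)
		}
		fmt.Println()
	}
}

CombinedOutput keeps stderr interleaved with stdout, which is why the error text lands directly under each ">>> " header in the dump.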

TestNetworkPlugins/group/cilium (3.72s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-366757 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-366757

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-366757

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-366757

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-366757

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-366757

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-366757

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-366757

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-366757

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-366757

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-366757

>>> host: /etc/nsswitch.conf:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: /etc/hosts:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: /etc/resolv.conf:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-366757

>>> host: crictl pods:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: crictl containers:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> k8s: describe netcat deployment:
error: context "cilium-366757" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-366757" does not exist

>>> k8s: netcat logs:
error: context "cilium-366757" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-366757" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-366757" does not exist

>>> k8s: coredns logs:
error: context "cilium-366757" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-366757" does not exist

>>> k8s: api server logs:
error: context "cilium-366757" does not exist

>>> host: /etc/cni:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: ip a s:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: ip r s:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: iptables-save:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: iptables table nat:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-366757

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-366757

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-366757" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-366757" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-366757

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-366757

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-366757" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-366757" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-366757" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-366757" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-366757" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: kubelet daemon config:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> k8s: kubelet logs:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:25:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-215889
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:26:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-644962
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10922/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-685997
contexts:
- context:
    cluster: cert-expiration-215889
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:25:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-215889
  name: cert-expiration-215889
- context:
    cluster: kubernetes-upgrade-644962
    user: kubernetes-upgrade-644962
  name: kubernetes-upgrade-644962
- context:
    cluster: missing-upgrade-685997
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-685997
  name: missing-upgrade-685997
current-context: ""
kind: Config
users:
- name: cert-expiration-215889
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/cert-expiration-215889/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/cert-expiration-215889/client.key
- name: kubernetes-upgrade-644962
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kubernetes-upgrade-644962/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/kubernetes-upgrade-644962/client.key
- name: missing-upgrade-685997
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/missing-upgrade-685997/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10922/.minikube/profiles/missing-upgrade-685997/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-366757

>>> host: docker daemon status:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: docker daemon config:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: docker system info:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: cri-docker daemon status:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: cri-docker daemon config:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: cri-dockerd version:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: containerd daemon status:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: containerd daemon config:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: containerd config dump:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: crio daemon status:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: crio daemon config:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: /etc/crio:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

>>> host: crio config:
* Profile "cilium-366757" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-366757"

----------------------- debugLogs end: cilium-366757 [took: 3.555148748s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-366757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-366757
--- SKIP: TestNetworkPlugins/group/cilium (3.72s)
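
The kubectl config dumped above is also why every kubectl probe in this section aborted: contexts contains only cert-expiration-215889, kubernetes-upgrade-644962, and missing-upgrade-685997, and current-context is "", so any call naming cilium-366757 fails with "context was not found". Below is a minimal Go sketch of verifying this directly with client-go; the kubeconfig path is illustrative, and the example assumes k8s.io/client-go is available in the module:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig that the dump above printed.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["cilium-366757"]; !ok {
		// The exact condition behind the kubectl probe failures above.
		fmt.Println(`context "cilium-366757" does not exist`)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext) // "" in the dump above
}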

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-900754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-900754
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)