Test Report: Docker_Linux_containerd_arm64 21966

                    
f7c9a93757611cb83a7bfb680dda9add42d627cb:2025-11-23:42464

Failed tests (4/333)

Order  Failed test                                                  Duration (s)
301    TestStartStop/group/old-k8s-version/serial/DeployApp         14.04
314    TestStartStop/group/no-preload/serial/DeployApp              12.85
318    TestStartStop/group/embed-certs/serial/DeployApp             15.90
341    TestStartStop/group/default-k8s-diff-port/serial/DeployApp   16.36
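
All four failures hit the same assertion: the busybox test pod comes up, but 'ulimit -n' inside it returns 1024 instead of the expected 1048576. A minimal sketch for reproducing the check by hand, assuming the testdata/busybox.yaml manifest from the minikube integration tests and a profile started with the flags recorded in the audit log below:

  # Rough manual reproduction of the failing check (a sketch, not the test code).
  # Assumes testdata/busybox.yaml from the minikube integration test directory.
  minikube start -p old-k8s-version-180638 --memory=3072 --driver=docker \
    --container-runtime=containerd --kubernetes-version=v1.28.0
  kubectl --context old-k8s-version-180638 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-180638 wait --for=condition=Ready \
    pod -l integration-test=busybox --timeout=8m
  # The tests expect the container's open-file soft limit to be 1048576;
  # in this run it came back as the default 1024.
  kubectl --context old-k8s-version-180638 exec busybox -- /bin/sh -c "ulimit -n"

The post-mortem output below captures docker inspect and minikube logs for the old-k8s-version profile; note that the kic container's "Ulimits" setting in the inspect output is empty for this run.
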
TestStartStop/group/old-k8s-version/serial/DeployApp (14.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-180638 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [54457203-a4b0-4bfe-b7e6-9804ec70353f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [54457203-a4b0-4bfe-b7e6-9804ec70353f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003159432s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-180638 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-180638
helpers_test.go:243: (dbg) docker inspect old-k8s-version-180638:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f",
	        "Created": "2025-11-23T08:41:19.865592877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197224,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:41:19.943635138Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f/hostname",
	        "HostsPath": "/var/lib/docker/containers/3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f/hosts",
	        "LogPath": "/var/lib/docker/containers/3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f/3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f-json.log",
	        "Name": "/old-k8s-version-180638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-180638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-180638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f",
	                "LowerDir": "/var/lib/docker/overlay2/3a0f954d6f7082ad577dca92fa6658b1e327bb820ce9a801d55d584f14165f01-init/diff:/var/lib/docker/overlay2/88c30082a717909d357f7d81c88a05ce3487a40d372ee6dc57fb9f012e0502da/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a0f954d6f7082ad577dca92fa6658b1e327bb820ce9a801d55d584f14165f01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a0f954d6f7082ad577dca92fa6658b1e327bb820ce9a801d55d584f14165f01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a0f954d6f7082ad577dca92fa6658b1e327bb820ce9a801d55d584f14165f01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-180638",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-180638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-180638",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-180638",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-180638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c7b0d9b425062d52a0c8052c45b2a62780ff3f6f2620c50e9e88251d56098ed9",
	            "SandboxKey": "/var/run/docker/netns/c7b0d9b42506",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-180638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:cc:5c:df:67:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec0f96b222364b6472248735ae9433b2f33bdeaa152953368412a68215eb42c4",
	                    "EndpointID": "20998764ba69f988f94705bb48be4dc33edbb29c350250a4be2539cea69e130e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-180638",
	                        "3fb449072f41"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180638 -n old-k8s-version-180638
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-180638 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-180638 logs -n 25: (1.230253972s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p cilium-440243 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo docker system info                                                                                                                                                                                                            │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo containerd config dump                                                                                                                                                                                                        │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo crio config                                                                                                                                                                                                                   │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ delete  │ -p cilium-440243                                                                                                                                                                                                                                    │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │ 23 Nov 25 08:39 UTC │
	│ start   │ -p cert-expiration-119748 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │ 23 Nov 25 08:40 UTC │
	│ ssh     │ force-systemd-env-760522 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ delete  │ -p force-systemd-env-760522                                                                                                                                                                                                                         │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ start   │ -p cert-options-106536 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ cert-options-106536 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ -p cert-options-106536 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p cert-options-106536                                                                                                                                                                                                                              │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:41:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:41:13.503798  196829 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:41:13.504001  196829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:41:13.504037  196829 out.go:374] Setting ErrFile to fd 2...
	I1123 08:41:13.504057  196829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:41:13.504449  196829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:41:13.504989  196829 out.go:368] Setting JSON to false
	I1123 08:41:13.507307  196829 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5022,"bootTime":1763882251,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:41:13.507402  196829 start.go:143] virtualization:  
	I1123 08:41:13.511220  196829 out.go:179] * [old-k8s-version-180638] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:41:13.515732  196829 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:41:13.516085  196829 notify.go:221] Checking for updates...
	I1123 08:41:13.523195  196829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:41:13.526521  196829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:41:13.529705  196829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 08:41:13.532894  196829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:41:13.536018  196829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:41:13.539629  196829 config.go:182] Loaded profile config "cert-expiration-119748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:41:13.539739  196829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:41:13.574366  196829 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:41:13.574516  196829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:41:13.638032  196829 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:41:13.62864309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:41:13.638135  196829 docker.go:319] overlay module found
	I1123 08:41:13.643635  196829 out.go:179] * Using the docker driver based on user configuration
	I1123 08:41:13.646835  196829 start.go:309] selected driver: docker
	I1123 08:41:13.646859  196829 start.go:927] validating driver "docker" against <nil>
	I1123 08:41:13.646879  196829 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:41:13.647612  196829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:41:13.702166  196829 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:41:13.693228668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:41:13.702317  196829 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:41:13.702534  196829 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:41:13.705700  196829 out.go:179] * Using Docker driver with root privileges
	I1123 08:41:13.708681  196829 cni.go:84] Creating CNI manager for ""
	I1123 08:41:13.708750  196829 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:41:13.708770  196829 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:41:13.708863  196829 start.go:353] cluster config:
	{Name:old-k8s-version-180638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:41:13.711891  196829 out.go:179] * Starting "old-k8s-version-180638" primary control-plane node in "old-k8s-version-180638" cluster
	I1123 08:41:13.714733  196829 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:41:13.717633  196829 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:41:13.720589  196829 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:41:13.720638  196829 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 08:41:13.720665  196829 cache.go:65] Caching tarball of preloaded images
	I1123 08:41:13.720676  196829 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:41:13.720783  196829 preload.go:238] Found /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:41:13.720794  196829 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:41:13.720923  196829 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/config.json ...
	I1123 08:41:13.720948  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/config.json: {Name:mk3fa6091d320fb60049f236674c350f36f8b1c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:13.740066  196829 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:41:13.740090  196829 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:41:13.740110  196829 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:41:13.740140  196829 start.go:360] acquireMachinesLock for old-k8s-version-180638: {Name:mk02adabcbe3b4194eb9b9cf13dfbc9bffd5d61a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:41:13.740251  196829 start.go:364] duration metric: took 92.325µs to acquireMachinesLock for "old-k8s-version-180638"
	I1123 08:41:13.740280  196829 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-180638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180638 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:41:13.740345  196829 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:41:13.743708  196829 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:41:13.743928  196829 start.go:159] libmachine.API.Create for "old-k8s-version-180638" (driver="docker")
	I1123 08:41:13.743964  196829 client.go:173] LocalClient.Create starting
	I1123 08:41:13.744044  196829 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem
	I1123 08:41:13.744081  196829 main.go:143] libmachine: Decoding PEM data...
	I1123 08:41:13.744099  196829 main.go:143] libmachine: Parsing certificate...
	I1123 08:41:13.744156  196829 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem
	I1123 08:41:13.744179  196829 main.go:143] libmachine: Decoding PEM data...
	I1123 08:41:13.744191  196829 main.go:143] libmachine: Parsing certificate...
	I1123 08:41:13.744566  196829 cli_runner.go:164] Run: docker network inspect old-k8s-version-180638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:41:13.760425  196829 cli_runner.go:211] docker network inspect old-k8s-version-180638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:41:13.760511  196829 network_create.go:284] running [docker network inspect old-k8s-version-180638] to gather additional debugging logs...
	I1123 08:41:13.760531  196829 cli_runner.go:164] Run: docker network inspect old-k8s-version-180638
	W1123 08:41:13.775922  196829 cli_runner.go:211] docker network inspect old-k8s-version-180638 returned with exit code 1
	I1123 08:41:13.775955  196829 network_create.go:287] error running [docker network inspect old-k8s-version-180638]: docker network inspect old-k8s-version-180638: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-180638 not found
	I1123 08:41:13.775968  196829 network_create.go:289] output of [docker network inspect old-k8s-version-180638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-180638 not found
	
	** /stderr **
	I1123 08:41:13.776076  196829 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:41:13.792199  196829 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a946cc9c0edf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:ea:52:17:a9:7a} reservation:<nil>}
	I1123 08:41:13.792559  196829 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb33daef15c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:08:1d:d1:c6:df} reservation:<nil>}
	I1123 08:41:13.792931  196829 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb61edac6088 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:64:59:e2:c3:5a} reservation:<nil>}
	I1123 08:41:13.793382  196829 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1e140}
	I1123 08:41:13.793443  196829 network_create.go:124] attempt to create docker network old-k8s-version-180638 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 08:41:13.793513  196829 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-180638 old-k8s-version-180638
	I1123 08:41:13.859515  196829 network_create.go:108] docker network old-k8s-version-180638 192.168.76.0/24 created
	I1123 08:41:13.859564  196829 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-180638" container
	I1123 08:41:13.859638  196829 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:41:13.877503  196829 cli_runner.go:164] Run: docker volume create old-k8s-version-180638 --label name.minikube.sigs.k8s.io=old-k8s-version-180638 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:41:13.898930  196829 oci.go:103] Successfully created a docker volume old-k8s-version-180638
	I1123 08:41:13.899032  196829 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-180638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-180638 --entrypoint /usr/bin/test -v old-k8s-version-180638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:41:14.458747  196829 oci.go:107] Successfully prepared a docker volume old-k8s-version-180638
	I1123 08:41:14.458805  196829 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:41:14.458814  196829 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:41:14.458892  196829 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-180638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:41:19.794152  196829 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-180638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.335195842s)
	I1123 08:41:19.794189  196829 kic.go:203] duration metric: took 5.335371475s to extract preloaded images to volume ...
	W1123 08:41:19.794328  196829 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:41:19.794436  196829 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:41:19.848844  196829 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-180638 --name old-k8s-version-180638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-180638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-180638 --network old-k8s-version-180638 --ip 192.168.76.2 --volume old-k8s-version-180638:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:41:20.177907  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Running}}
	I1123 08:41:20.204948  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:41:20.227539  196829 cli_runner.go:164] Run: docker exec old-k8s-version-180638 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:41:20.289856  196829 oci.go:144] the created container "old-k8s-version-180638" has a running status.
	I1123 08:41:20.289891  196829 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa...
	I1123 08:41:20.448285  196829 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:41:20.475665  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:41:20.521617  196829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:41:20.521635  196829 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-180638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:41:20.589359  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:41:20.625639  196829 machine.go:94] provisionDockerMachine start ...
	I1123 08:41:20.625720  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:20.654376  196829 main.go:143] libmachine: Using SSH client type: native
	I1123 08:41:20.655192  196829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1123 08:41:20.655341  196829 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:41:20.656290  196829 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:41:23.816940  196829 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-180638
	
	I1123 08:41:23.816964  196829 ubuntu.go:182] provisioning hostname "old-k8s-version-180638"
	I1123 08:41:23.817040  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:23.833840  196829 main.go:143] libmachine: Using SSH client type: native
	I1123 08:41:23.834172  196829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1123 08:41:23.834187  196829 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-180638 && echo "old-k8s-version-180638" | sudo tee /etc/hostname
	I1123 08:41:23.999609  196829 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-180638
	
	I1123 08:41:23.999698  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:24.020254  196829 main.go:143] libmachine: Using SSH client type: native
	I1123 08:41:24.020584  196829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1123 08:41:24.020601  196829 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-180638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-180638/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-180638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:41:24.185924  196829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:41:24.185946  196829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-2339/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-2339/.minikube}
	I1123 08:41:24.185967  196829 ubuntu.go:190] setting up certificates
	I1123 08:41:24.185976  196829 provision.go:84] configureAuth start
	I1123 08:41:24.186052  196829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-180638
	I1123 08:41:24.215320  196829 provision.go:143] copyHostCerts
	I1123 08:41:24.215378  196829 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem, removing ...
	I1123 08:41:24.215387  196829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem
	I1123 08:41:24.215451  196829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem (1078 bytes)
	I1123 08:41:24.215548  196829 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem, removing ...
	I1123 08:41:24.215553  196829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem
	I1123 08:41:24.215581  196829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem (1123 bytes)
	I1123 08:41:24.215633  196829 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem, removing ...
	I1123 08:41:24.215638  196829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem
	I1123 08:41:24.215661  196829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem (1675 bytes)
	I1123 08:41:24.216026  196829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-180638 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-180638]
	I1123 08:41:24.624778  196829 provision.go:177] copyRemoteCerts
	I1123 08:41:24.624888  196829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:41:24.624959  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:24.646886  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:41:24.753771  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:41:24.771993  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:41:24.790069  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 08:41:24.807496  196829 provision.go:87] duration metric: took 621.497153ms to configureAuth
	I1123 08:41:24.807563  196829 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:41:24.807769  196829 config.go:182] Loaded profile config "old-k8s-version-180638": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:41:24.807806  196829 machine.go:97] duration metric: took 4.182148274s to provisionDockerMachine
	I1123 08:41:24.807853  196829 client.go:176] duration metric: took 11.063877137s to LocalClient.Create
	I1123 08:41:24.807895  196829 start.go:167] duration metric: took 11.063966541s to libmachine.API.Create "old-k8s-version-180638"
	I1123 08:41:24.807925  196829 start.go:293] postStartSetup for "old-k8s-version-180638" (driver="docker")
	I1123 08:41:24.807964  196829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:41:24.808042  196829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:41:24.808096  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:24.825195  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:41:24.930003  196829 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:41:24.933389  196829 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:41:24.933440  196829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:41:24.933453  196829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/addons for local assets ...
	I1123 08:41:24.933516  196829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/files for local assets ...
	I1123 08:41:24.933597  196829 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem -> 41512.pem in /etc/ssl/certs
	I1123 08:41:24.933700  196829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:41:24.941173  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:41:24.960763  196829 start.go:296] duration metric: took 152.794115ms for postStartSetup
	I1123 08:41:24.961139  196829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-180638
	I1123 08:41:24.978306  196829 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/config.json ...
	I1123 08:41:24.978587  196829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:41:24.978642  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:24.994847  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:41:25.098792  196829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:41:25.103719  196829 start.go:128] duration metric: took 11.363355721s to createHost
	I1123 08:41:25.103745  196829 start.go:83] releasing machines lock for "old-k8s-version-180638", held for 11.363481187s
	I1123 08:41:25.103820  196829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-180638
	I1123 08:41:25.123598  196829 ssh_runner.go:195] Run: cat /version.json
	I1123 08:41:25.123615  196829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:41:25.123646  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:25.123677  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:25.149385  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:41:25.159257  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:41:25.253035  196829 ssh_runner.go:195] Run: systemctl --version
	I1123 08:41:25.348445  196829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:41:25.352830  196829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:41:25.352933  196829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:41:25.381383  196829 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:41:25.381469  196829 start.go:496] detecting cgroup driver to use...
	I1123 08:41:25.381508  196829 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:41:25.381570  196829 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:41:25.397040  196829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:41:25.410260  196829 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:41:25.410362  196829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:41:25.428008  196829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:41:25.447082  196829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:41:25.620588  196829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:41:25.749588  196829 docker.go:234] disabling docker service ...
	I1123 08:41:25.749661  196829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:41:25.772076  196829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:41:25.784914  196829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:41:25.899082  196829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:41:26.009981  196829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:41:26.025315  196829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:41:26.039953  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1123 08:41:26.049471  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:41:26.059847  196829 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:41:26.060009  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:41:26.069667  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:41:26.079903  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:41:26.089816  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:41:26.099752  196829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:41:26.108060  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:41:26.117585  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:41:26.126366  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:41:26.135803  196829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:41:26.143649  196829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:41:26.151206  196829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:41:26.281475  196829 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:41:26.394263  196829 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:41:26.394379  196829 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:41:26.398397  196829 start.go:564] Will wait 60s for crictl version
	I1123 08:41:26.398525  196829 ssh_runner.go:195] Run: which crictl
	I1123 08:41:26.402050  196829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:41:26.433447  196829 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:41:26.433548  196829 ssh_runner.go:195] Run: containerd --version
	I1123 08:41:26.456534  196829 ssh_runner.go:195] Run: containerd --version
	I1123 08:41:26.486458  196829 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1123 08:41:26.489565  196829 cli_runner.go:164] Run: docker network inspect old-k8s-version-180638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:41:26.507660  196829 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:41:26.511689  196829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:41:26.521591  196829 kubeadm.go:884] updating cluster {Name:old-k8s-version-180638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:41:26.521716  196829 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:41:26.521782  196829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:41:26.552790  196829 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:41:26.552815  196829 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:41:26.552879  196829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:41:26.589503  196829 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:41:26.589526  196829 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:41:26.589533  196829 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1123 08:41:26.589674  196829 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-180638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:41:26.589739  196829 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:41:26.615213  196829 cni.go:84] Creating CNI manager for ""
	I1123 08:41:26.615295  196829 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:41:26.615324  196829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:41:26.615377  196829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-180638 NodeName:old-k8s-version-180638 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:41:26.615549  196829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-180638"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:41:26.615640  196829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 08:41:26.623537  196829 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:41:26.623635  196829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:41:26.631295  196829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1123 08:41:26.643882  196829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:41:26.657243  196829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1123 08:41:26.669640  196829 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:41:26.673282  196829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:41:26.685864  196829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:41:26.794513  196829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:41:26.810973  196829 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638 for IP: 192.168.76.2
	I1123 08:41:26.811039  196829 certs.go:195] generating shared ca certs ...
	I1123 08:41:26.811080  196829 certs.go:227] acquiring lock for ca certs: {Name:mke0fc62f41acbef5eb3e84af3a3b8f9858bd1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:26.811250  196829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key
	I1123 08:41:26.811333  196829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key
	I1123 08:41:26.811355  196829 certs.go:257] generating profile certs ...
	I1123 08:41:26.811440  196829 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.key
	I1123 08:41:26.811477  196829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt with IP's: []
	I1123 08:41:26.973605  196829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt ...
	I1123 08:41:26.973639  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt: {Name:mke32e0874274fa8086c901b1e6afbf9faff17cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:26.973836  196829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.key ...
	I1123 08:41:26.973854  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.key: {Name:mk164b3f8143768da540cf1b000f576503ef0774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:26.974478  196829 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key.28528907
	I1123 08:41:26.974505  196829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt.28528907 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 08:41:27.162797  196829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt.28528907 ...
	I1123 08:41:27.162827  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt.28528907: {Name:mk89f25fc4240f5ec0b53706cf7a05d65ec41dcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:27.163533  196829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key.28528907 ...
	I1123 08:41:27.163550  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key.28528907: {Name:mkceae69a15be6eedc78c0f192aa68e5077c2c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:27.164156  196829 certs.go:382] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt.28528907 -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt
	I1123 08:41:27.164252  196829 certs.go:386] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key.28528907 -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key
	I1123 08:41:27.164317  196829 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.key
	I1123 08:41:27.164337  196829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.crt with IP's: []
	I1123 08:41:27.589335  196829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.crt ...
	I1123 08:41:27.589366  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.crt: {Name:mk5e88fa47e7c5af72b6e967a38cd87e0cc58d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:27.590109  196829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.key ...
	I1123 08:41:27.590126  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.key: {Name:mka6f06ef565fc329562ab2f39faf7c67e598a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:27.590847  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem (1338 bytes)
	W1123 08:41:27.590897  196829 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151_empty.pem, impossibly tiny 0 bytes
	I1123 08:41:27.590910  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 08:41:27.590954  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:41:27.590984  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:41:27.591012  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem (1675 bytes)
	I1123 08:41:27.591064  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:41:27.591653  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:41:27.611397  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:41:27.628655  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:41:27.646428  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:41:27.663648  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 08:41:27.680373  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:41:27.697528  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:41:27.718625  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:41:27.735969  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:41:27.753670  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem --> /usr/share/ca-certificates/4151.pem (1338 bytes)
	I1123 08:41:27.772203  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /usr/share/ca-certificates/41512.pem (1708 bytes)
	I1123 08:41:27.790388  196829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:41:27.803782  196829 ssh_runner.go:195] Run: openssl version
	I1123 08:41:27.810231  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4151.pem && ln -fs /usr/share/ca-certificates/4151.pem /etc/ssl/certs/4151.pem"
	I1123 08:41:27.818398  196829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4151.pem
	I1123 08:41:27.822235  196829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:02 /usr/share/ca-certificates/4151.pem
	I1123 08:41:27.822298  196829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4151.pem
	I1123 08:41:27.864039  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4151.pem /etc/ssl/certs/51391683.0"
	I1123 08:41:27.872287  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41512.pem && ln -fs /usr/share/ca-certificates/41512.pem /etc/ssl/certs/41512.pem"
	I1123 08:41:27.880642  196829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41512.pem
	I1123 08:41:27.884373  196829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:02 /usr/share/ca-certificates/41512.pem
	I1123 08:41:27.884446  196829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41512.pem
	I1123 08:41:27.925706  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41512.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:41:27.933986  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:41:27.942212  196829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:41:27.945912  196829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:41:27.945995  196829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:41:27.987134  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:41:27.995374  196829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:41:27.999559  196829 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:41:27.999640  196829 kubeadm.go:401] StartCluster: {Name:old-k8s-version-180638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:41:27.999724  196829 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:41:27.999901  196829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:41:28.030022  196829 cri.go:89] found id: ""
	I1123 08:41:28.030090  196829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:41:28.038618  196829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:41:28.046519  196829 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:41:28.046606  196829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:41:28.054666  196829 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:41:28.054688  196829 kubeadm.go:158] found existing configuration files:
	
	I1123 08:41:28.054763  196829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:41:28.062722  196829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:41:28.062824  196829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:41:28.070543  196829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:41:28.078377  196829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:41:28.078469  196829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:41:28.085999  196829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:41:28.093970  196829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:41:28.094044  196829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:41:28.101534  196829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:41:28.109634  196829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:41:28.109755  196829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:41:28.117144  196829 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:41:28.212901  196829 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:41:28.307897  196829 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:41:46.723355  196829 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1123 08:41:46.723418  196829 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:41:46.723506  196829 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:41:46.723561  196829 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:41:46.723595  196829 kubeadm.go:319] OS: Linux
	I1123 08:41:46.723640  196829 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:41:46.723688  196829 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:41:46.723735  196829 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:41:46.723783  196829 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:41:46.723830  196829 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:41:46.723879  196829 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:41:46.723925  196829 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:41:46.723972  196829 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:41:46.724018  196829 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:41:46.724090  196829 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:41:46.724184  196829 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:41:46.724277  196829 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1123 08:41:46.724339  196829 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:41:46.730394  196829 out.go:252]   - Generating certificates and keys ...
	I1123 08:41:46.730493  196829 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:41:46.730559  196829 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:41:46.730625  196829 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:41:46.730681  196829 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:41:46.730740  196829 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:41:46.730789  196829 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:41:46.730843  196829 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:41:46.730979  196829 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-180638] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:41:46.731033  196829 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:41:46.731156  196829 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-180638] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:41:46.731221  196829 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:41:46.731283  196829 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:41:46.731327  196829 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:41:46.731382  196829 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:41:46.731432  196829 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:41:46.731487  196829 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:41:46.731552  196829 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:41:46.731606  196829 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:41:46.731687  196829 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:41:46.732404  196829 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:41:46.735499  196829 out.go:252]   - Booting up control plane ...
	I1123 08:41:46.735693  196829 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:41:46.735790  196829 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:41:46.735869  196829 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:41:46.735991  196829 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:41:46.736083  196829 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:41:46.736124  196829 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:41:46.736298  196829 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1123 08:41:46.736379  196829 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.016975 seconds
	I1123 08:41:46.736508  196829 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:41:46.736649  196829 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:41:46.736716  196829 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:41:46.737049  196829 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-180638 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:41:46.737114  196829 kubeadm.go:319] [bootstrap-token] Using token: 89uxh1.yt288j2wm2p51h2c
	I1123 08:41:46.740440  196829 out.go:252]   - Configuring RBAC rules ...
	I1123 08:41:46.740562  196829 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:41:46.740658  196829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:41:46.740805  196829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:41:46.740950  196829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:41:46.741070  196829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:41:46.741162  196829 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:41:46.741276  196829 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:41:46.741318  196829 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:41:46.741363  196829 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:41:46.741369  196829 kubeadm.go:319] 
	I1123 08:41:46.741466  196829 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:41:46.741471  196829 kubeadm.go:319] 
	I1123 08:41:46.741547  196829 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:41:46.741551  196829 kubeadm.go:319] 
	I1123 08:41:46.741575  196829 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:41:46.741639  196829 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:41:46.741693  196829 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:41:46.741696  196829 kubeadm.go:319] 
	I1123 08:41:46.741757  196829 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:41:46.741761  196829 kubeadm.go:319] 
	I1123 08:41:46.741808  196829 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:41:46.741811  196829 kubeadm.go:319] 
	I1123 08:41:46.741868  196829 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:41:46.741944  196829 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:41:46.742020  196829 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:41:46.742024  196829 kubeadm.go:319] 
	I1123 08:41:46.742111  196829 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:41:46.742188  196829 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:41:46.742192  196829 kubeadm.go:319] 
	I1123 08:41:46.742277  196829 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 89uxh1.yt288j2wm2p51h2c \
	I1123 08:41:46.742380  196829 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 \
	I1123 08:41:46.742400  196829 kubeadm.go:319] 	--control-plane 
	I1123 08:41:46.742404  196829 kubeadm.go:319] 
	I1123 08:41:46.742493  196829 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:41:46.742497  196829 kubeadm.go:319] 
	I1123 08:41:46.742578  196829 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 89uxh1.yt288j2wm2p51h2c \
	I1123 08:41:46.742696  196829 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 
	I1123 08:41:46.742705  196829 cni.go:84] Creating CNI manager for ""
	I1123 08:41:46.742712  196829 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:41:46.747905  196829 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:41:46.750796  196829 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:41:46.761561  196829 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1123 08:41:46.761582  196829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:41:46.780526  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:41:47.782764  196829 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.002206277s)
	I1123 08:41:47.782810  196829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:41:47.782925  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:47.783012  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-180638 minikube.k8s.io/updated_at=2025_11_23T08_41_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=old-k8s-version-180638 minikube.k8s.io/primary=true
	I1123 08:41:47.996747  196829 ops.go:34] apiserver oom_adj: -16
	I1123 08:41:47.996865  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:48.497263  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:48.997587  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:49.497238  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:49.996982  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:50.497817  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:50.996983  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:51.497681  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:51.997616  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:52.497659  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:52.997821  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:53.497324  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:53.997887  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:54.496981  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:54.996975  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:55.496982  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:55.997716  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:56.497689  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:56.997844  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:57.497606  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:57.997246  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:58.497272  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:58.997225  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:59.497615  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:59.996938  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:42:00.373202  196829 kubeadm.go:1114] duration metric: took 12.590316137s to wait for elevateKubeSystemPrivileges
	I1123 08:42:00.373235  196829 kubeadm.go:403] duration metric: took 32.37359943s to StartCluster
	I1123 08:42:00.373254  196829 settings.go:142] acquiring lock: {Name:mkfb77243b31dfe604b438e7da3f1bce2ba7b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:42:00.373329  196829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:42:00.374576  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:42:00.374865  196829 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:42:00.375126  196829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:42:00.375440  196829 config.go:182] Loaded profile config "old-k8s-version-180638": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:42:00.375497  196829 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:42:00.375560  196829 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-180638"
	I1123 08:42:00.375575  196829 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-180638"
	I1123 08:42:00.375597  196829 host.go:66] Checking if "old-k8s-version-180638" exists ...
	I1123 08:42:00.375813  196829 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-180638"
	I1123 08:42:00.375848  196829 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-180638"
	I1123 08:42:00.376308  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:42:00.376539  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:42:00.379011  196829 out.go:179] * Verifying Kubernetes components...
	I1123 08:42:00.382111  196829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:42:00.428496  196829 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-180638"
	I1123 08:42:00.428566  196829 host.go:66] Checking if "old-k8s-version-180638" exists ...
	I1123 08:42:00.429356  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:42:00.444047  196829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:42:00.448509  196829 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:42:00.448558  196829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:42:00.448647  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:42:00.472475  196829 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:42:00.472504  196829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:42:00.472636  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:42:00.490205  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:42:00.514193  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:42:00.878161  196829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:42:00.878301  196829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:42:00.916437  196829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:42:01.023971  196829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:42:01.723716  196829 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 08:42:01.726193  196829 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-180638" to be "Ready" ...
	I1123 08:42:02.171067  196829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147020479s)
	I1123 08:42:02.174415  196829 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 08:42:02.177439  196829 addons.go:530] duration metric: took 1.801906906s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 08:42:02.232613  196829 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-180638" context rescaled to 1 replicas
	W1123 08:42:03.730244  196829 node_ready.go:57] node "old-k8s-version-180638" has "Ready":"False" status (will retry)
	W1123 08:42:06.235867  196829 node_ready.go:57] node "old-k8s-version-180638" has "Ready":"False" status (will retry)
	W1123 08:42:08.729375  196829 node_ready.go:57] node "old-k8s-version-180638" has "Ready":"False" status (will retry)
	W1123 08:42:10.729575  196829 node_ready.go:57] node "old-k8s-version-180638" has "Ready":"False" status (will retry)
	W1123 08:42:12.729904  196829 node_ready.go:57] node "old-k8s-version-180638" has "Ready":"False" status (will retry)
	I1123 08:42:13.730112  196829 node_ready.go:49] node "old-k8s-version-180638" is "Ready"
	I1123 08:42:13.730141  196829 node_ready.go:38] duration metric: took 12.003828725s for node "old-k8s-version-180638" to be "Ready" ...
	I1123 08:42:13.730157  196829 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:42:13.730215  196829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:42:13.742876  196829 api_server.go:72] duration metric: took 13.367936978s to wait for apiserver process to appear ...
	I1123 08:42:13.742904  196829 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:42:13.742928  196829 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:42:13.752538  196829 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 08:42:13.753958  196829 api_server.go:141] control plane version: v1.28.0
	I1123 08:42:13.753984  196829 api_server.go:131] duration metric: took 11.072911ms to wait for apiserver health ...
	I1123 08:42:13.753994  196829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:42:13.757334  196829 system_pods.go:59] 8 kube-system pods found
	I1123 08:42:13.757377  196829 system_pods.go:61] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:42:13.757384  196829 system_pods.go:61] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:13.757390  196829 system_pods.go:61] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:13.757394  196829 system_pods.go:61] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:13.757398  196829 system_pods.go:61] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:13.757402  196829 system_pods.go:61] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:13.757449  196829 system_pods.go:61] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:13.757461  196829 system_pods.go:61] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:42:13.757470  196829 system_pods.go:74] duration metric: took 3.469421ms to wait for pod list to return data ...
	I1123 08:42:13.757483  196829 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:42:13.759772  196829 default_sa.go:45] found service account: "default"
	I1123 08:42:13.759795  196829 default_sa.go:55] duration metric: took 2.306419ms for default service account to be created ...
	I1123 08:42:13.759805  196829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:42:13.764346  196829 system_pods.go:86] 8 kube-system pods found
	I1123 08:42:13.764381  196829 system_pods.go:89] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:42:13.764387  196829 system_pods.go:89] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:13.764393  196829 system_pods.go:89] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:13.764398  196829 system_pods.go:89] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:13.764402  196829 system_pods.go:89] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:13.764426  196829 system_pods.go:89] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:13.764438  196829 system_pods.go:89] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:13.764445  196829 system_pods.go:89] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:42:13.764468  196829 retry.go:31] will retry after 231.795609ms: missing components: kube-dns
	I1123 08:42:14.002188  196829 system_pods.go:86] 8 kube-system pods found
	I1123 08:42:14.002226  196829 system_pods.go:89] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:42:14.002234  196829 system_pods.go:89] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:14.002241  196829 system_pods.go:89] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:14.002290  196829 system_pods.go:89] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:14.002297  196829 system_pods.go:89] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:14.002309  196829 system_pods.go:89] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:14.002313  196829 system_pods.go:89] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:14.002319  196829 system_pods.go:89] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:42:14.002358  196829 retry.go:31] will retry after 309.541133ms: missing components: kube-dns
	I1123 08:42:14.316329  196829 system_pods.go:86] 8 kube-system pods found
	I1123 08:42:14.316371  196829 system_pods.go:89] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:42:14.316378  196829 system_pods.go:89] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:14.316410  196829 system_pods.go:89] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:14.316416  196829 system_pods.go:89] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:14.316420  196829 system_pods.go:89] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:14.316425  196829 system_pods.go:89] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:14.316453  196829 system_pods.go:89] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:14.316462  196829 system_pods.go:89] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:42:14.316487  196829 retry.go:31] will retry after 469.87728ms: missing components: kube-dns
	I1123 08:42:14.791058  196829 system_pods.go:86] 8 kube-system pods found
	I1123 08:42:14.791093  196829 system_pods.go:89] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:42:14.791100  196829 system_pods.go:89] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:14.791106  196829 system_pods.go:89] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:14.791110  196829 system_pods.go:89] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:14.791115  196829 system_pods.go:89] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:14.791119  196829 system_pods.go:89] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:14.791123  196829 system_pods.go:89] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:14.791129  196829 system_pods.go:89] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:42:14.791144  196829 retry.go:31] will retry after 367.579223ms: missing components: kube-dns
	I1123 08:42:15.163345  196829 system_pods.go:86] 8 kube-system pods found
	I1123 08:42:15.163377  196829 system_pods.go:89] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Running
	I1123 08:42:15.163384  196829 system_pods.go:89] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:15.163388  196829 system_pods.go:89] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:15.163393  196829 system_pods.go:89] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:15.163398  196829 system_pods.go:89] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:15.163401  196829 system_pods.go:89] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:15.163405  196829 system_pods.go:89] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:15.163409  196829 system_pods.go:89] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Running
	I1123 08:42:15.163417  196829 system_pods.go:126] duration metric: took 1.403606184s to wait for k8s-apps to be running ...
	I1123 08:42:15.163424  196829 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:42:15.163481  196829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:42:15.176644  196829 system_svc.go:56] duration metric: took 13.210368ms WaitForService to wait for kubelet
	I1123 08:42:15.176674  196829 kubeadm.go:587] duration metric: took 14.80173902s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:42:15.176693  196829 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:42:15.179781  196829 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:42:15.179818  196829 node_conditions.go:123] node cpu capacity is 2
	I1123 08:42:15.179832  196829 node_conditions.go:105] duration metric: took 3.134393ms to run NodePressure ...
	I1123 08:42:15.179843  196829 start.go:242] waiting for startup goroutines ...
	I1123 08:42:15.179851  196829 start.go:247] waiting for cluster config update ...
	I1123 08:42:15.179867  196829 start.go:256] writing updated cluster config ...
	I1123 08:42:15.180158  196829 ssh_runner.go:195] Run: rm -f paused
	I1123 08:42:15.184124  196829 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:42:15.188984  196829 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-q4lbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.197388  196829 pod_ready.go:94] pod "coredns-5dd5756b68-q4lbv" is "Ready"
	I1123 08:42:15.197483  196829 pod_ready.go:86] duration metric: took 8.468594ms for pod "coredns-5dd5756b68-q4lbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.200541  196829 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.205348  196829 pod_ready.go:94] pod "etcd-old-k8s-version-180638" is "Ready"
	I1123 08:42:15.205396  196829 pod_ready.go:86] duration metric: took 4.809714ms for pod "etcd-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.208274  196829 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.213022  196829 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-180638" is "Ready"
	I1123 08:42:15.213049  196829 pod_ready.go:86] duration metric: took 4.746468ms for pod "kube-apiserver-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.216062  196829 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.588621  196829 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-180638" is "Ready"
	I1123 08:42:15.588649  196829 pod_ready.go:86] duration metric: took 372.560174ms for pod "kube-controller-manager-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.789577  196829 pod_ready.go:83] waiting for pod "kube-proxy-dk6g5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:16.188996  196829 pod_ready.go:94] pod "kube-proxy-dk6g5" is "Ready"
	I1123 08:42:16.189025  196829 pod_ready.go:86] duration metric: took 399.418985ms for pod "kube-proxy-dk6g5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:16.388950  196829 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:16.788322  196829 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-180638" is "Ready"
	I1123 08:42:16.788348  196829 pod_ready.go:86] duration metric: took 399.371796ms for pod "kube-scheduler-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:16.788362  196829 pod_ready.go:40] duration metric: took 1.604205013s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:42:16.845637  196829 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 08:42:16.848524  196829 out.go:203] 
	W1123 08:42:16.851133  196829 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:42:16.854166  196829 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:42:16.857768  196829 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-180638" cluster and "default" namespace by default
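	The readiness and health checks recorded above poll the node object, the apiserver /healthz endpoint, and the kube-system pod list in turn. A minimal sketch of the same checks run by hand (assuming the kubeconfig written for this profile is the active context, and that the apiserver address 192.168.76.2:8443 shown in the log still applies):
	
	    # wait for the node to report Ready, as the log does above
	    kubectl get node old-k8s-version-180638 -w
	    # /healthz is readable without credentials under the default RBAC bindings
	    curl -k https://192.168.76.2:8443/healthz
	    # confirm the kube-system pods the log waits on are Running
	    kubectl -n kube-system get pods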
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	91bc48b43ecc6       1611cd07b61d5       7 seconds ago       Running             busybox                   0                   e4ae249cb52e3       busybox                                          default
	d28eb2e2ce196       ba04bb24b9575       13 seconds ago      Running             storage-provisioner       0                   a34410332e173       storage-provisioner                              kube-system
	7c2ec14edc41a       97e04611ad434       13 seconds ago      Running             coredns                   0                   c2b32ac0a3158       coredns-5dd5756b68-q4lbv                         kube-system
	75439fed83684       b1a8c6f707935       24 seconds ago      Running             kindnet-cni               0                   4f646733919cf       kindnet-mrfgl                                    kube-system
	a92786aea3fde       940f54a5bcae9       26 seconds ago      Running             kube-proxy                0                   304e17d801222       kube-proxy-dk6g5                                 kube-system
	dd592fa780598       9cdd6470f48c8       47 seconds ago      Running             etcd                      0                   6c8aefe95a6ce       etcd-old-k8s-version-180638                      kube-system
	9b79849edeb76       00543d2fe5d71       47 seconds ago      Running             kube-apiserver            0                   53e8e5479de81       kube-apiserver-old-k8s-version-180638            kube-system
	3a3a4da63be8b       46cc66ccc7c19       47 seconds ago      Running             kube-controller-manager   0                   79bfc44a51fa1       kube-controller-manager-old-k8s-version-180638   kube-system
	81034c6fa713b       762dce4090c5f       47 seconds ago      Running             kube-scheduler            0                   13b0850cbdf71       kube-scheduler-old-k8s-version-180638            kube-system
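	A listing like the one above can be regenerated on the node itself; a minimal sketch, assuming the profile name from the log and that crictl is present in the minikube node image:
	
	    # open a shell on the minikube node for this profile
	    minikube ssh -p old-k8s-version-180638
	    # inside the node: list running containers with their pods and namespaces
	    sudo crictl ps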
	
	
	==> containerd <==
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.182847695Z" level=info msg="CreateContainer within sandbox \"c2b32ac0a3158e5b8e88a60e8ec54f99f67326e1aba5a91b8ead5c4893516fa1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c2ec14edc41a4bad3c997ddbc366a57e75740ffe5d4d804776570d3bbf4089a\""
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.185890280Z" level=info msg="StartContainer for \"7c2ec14edc41a4bad3c997ddbc366a57e75740ffe5d4d804776570d3bbf4089a\""
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.188348487Z" level=info msg="connecting to shim 7c2ec14edc41a4bad3c997ddbc366a57e75740ffe5d4d804776570d3bbf4089a" address="unix:///run/containerd/s/23e8ed77d6d2545cc040cf10e94b9aa1307cea730c4e678c5b1ba5d216eb3aae" protocol=ttrpc version=3
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.191389545Z" level=info msg="CreateContainer within sandbox \"a34410332e1739898fe28b96e52dd9c87f97e3c9bb7b1ffd7f9865c04fcab2a8\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"d28eb2e2ce19649f4947ef6afbf30d211d2dfa34551b90f0c10c58fdb65b63cd\""
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.193542238Z" level=info msg="StartContainer for \"d28eb2e2ce19649f4947ef6afbf30d211d2dfa34551b90f0c10c58fdb65b63cd\""
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.194431932Z" level=info msg="connecting to shim d28eb2e2ce19649f4947ef6afbf30d211d2dfa34551b90f0c10c58fdb65b63cd" address="unix:///run/containerd/s/21496efb2be254d32b19cec40d0f9ba01ff31efa61fb387b7a12652ab6551c66" protocol=ttrpc version=3
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.282934097Z" level=info msg="StartContainer for \"7c2ec14edc41a4bad3c997ddbc366a57e75740ffe5d4d804776570d3bbf4089a\" returns successfully"
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.297466583Z" level=info msg="StartContainer for \"d28eb2e2ce19649f4947ef6afbf30d211d2dfa34551b90f0c10c58fdb65b63cd\" returns successfully"
	Nov 23 08:42:17 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:17.425584929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:54457203-a4b0-4bfe-b7e6-9804ec70353f,Namespace:default,Attempt:0,}"
	Nov 23 08:42:17 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:17.487893445Z" level=info msg="connecting to shim e4ae249cb52e36b4d0a2f9b31e40d2ac6f561a86c7f5020966174ef8dddb28bd" address="unix:///run/containerd/s/48de018ba0d9ba174f3818878e132ec8a301b930403195fb51f42bfd7ba5e6a1" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:42:17 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:17.543159907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:54457203-a4b0-4bfe-b7e6-9804ec70353f,Namespace:default,Attempt:0,} returns sandbox id \"e4ae249cb52e36b4d0a2f9b31e40d2ac6f561a86c7f5020966174ef8dddb28bd\""
	Nov 23 08:42:17 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:17.545984305Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.686274001Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.688431814Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.690941320Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.694195944Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.694856179Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.148586355s"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.694995922Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.698429970Z" level=info msg="CreateContainer within sandbox \"e4ae249cb52e36b4d0a2f9b31e40d2ac6f561a86c7f5020966174ef8dddb28bd\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.711352120Z" level=info msg="Container 91bc48b43ecc67ffc1bc3f7fdbc911d26a4116b41c47fa062edb6c0dda1555ed: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.724026742Z" level=info msg="CreateContainer within sandbox \"e4ae249cb52e36b4d0a2f9b31e40d2ac6f561a86c7f5020966174ef8dddb28bd\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"91bc48b43ecc67ffc1bc3f7fdbc911d26a4116b41c47fa062edb6c0dda1555ed\""
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.724804821Z" level=info msg="StartContainer for \"91bc48b43ecc67ffc1bc3f7fdbc911d26a4116b41c47fa062edb6c0dda1555ed\""
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.725703310Z" level=info msg="connecting to shim 91bc48b43ecc67ffc1bc3f7fdbc911d26a4116b41c47fa062edb6c0dda1555ed" address="unix:///run/containerd/s/48de018ba0d9ba174f3818878e132ec8a301b930403195fb51f42bfd7ba5e6a1" protocol=ttrpc version=3
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.798131338Z" level=info msg="StartContainer for \"91bc48b43ecc67ffc1bc3f7fdbc911d26a4116b41c47fa062edb6c0dda1555ed\" returns successfully"
	Nov 23 08:42:26 old-k8s-version-180638 containerd[758]: E1123 08:42:26.310347     758 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [7c2ec14edc41a4bad3c997ddbc366a57e75740ffe5d4d804776570d3bbf4089a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55800 - 59523 "HINFO IN 7767641017076382384.181717569997239392. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011046589s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-180638
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-180638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=old-k8s-version-180638
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_41_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:41:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-180638
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:42:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:42:17 +0000   Sun, 23 Nov 2025 08:41:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:42:17 +0000   Sun, 23 Nov 2025 08:41:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:42:17 +0000   Sun, 23 Nov 2025 08:41:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:42:17 +0000   Sun, 23 Nov 2025 08:42:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-180638
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                66eb206b-bbaa-475d-8a79-ca34c9a5fe12
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-q4lbv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-180638                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-mrfgl                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-180638             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-180638    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-dk6g5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-180638             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-180638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-180638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-180638 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-180638 event: Registered Node old-k8s-version-180638 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-180638 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	
	
	==> etcd [dd592fa780598a368949db2030306613299c5b0608cf477fbac364062431cf64] <==
	{"level":"info","ts":"2025-11-23T08:41:39.991972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-23T08:41:39.99787Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:41:40.001646Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T08:41:40.001836Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T08:41:40.002083Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T08:41:40.005961Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T08:41:40.00623Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:41:40.257458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T08:41:40.25751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T08:41:40.257539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-23T08:41:40.257552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:41:40.257563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T08:41:40.257574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-23T08:41:40.257585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T08:41:40.268883Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-180638 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:41:40.268928Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:41:40.269997Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:41:40.270072Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:41:40.279065Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:41:40.280196Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T08:41:40.28074Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:41:40.280878Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T08:41:40.285959Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:41:40.2918Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:41:40.291928Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 08:42:27 up  1:24,  0 user,  load average: 2.87, 3.94, 3.11
	Linux old-k8s-version-180638 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [75439fed83684fc39ca1dda64cef2644f6e3027bddbd15dff08e7923652250de] <==
	I1123 08:42:03.267849       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:42:03.357718       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:42:03.358291       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:42:03.358311       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:42:03.358351       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:42:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:42:03.558800       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:42:03.558877       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:42:03.558907       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:42:03.559938       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:42:03.759127       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:42:03.759212       1 metrics.go:72] Registering metrics
	I1123 08:42:03.759307       1 controller.go:711] "Syncing nftables rules"
	I1123 08:42:13.563082       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:42:13.563142       1 main.go:301] handling current node
	I1123 08:42:23.558928       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:42:23.558968       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9b79849edeb76ebe3d1f35f60331849eb478148607f83d7e7cc04f6a89d49cef] <==
	I1123 08:41:43.257160       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:41:43.264540       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:41:43.264801       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 08:41:43.264982       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:41:43.265064       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:41:43.265148       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:41:43.265235       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:41:43.265825       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 08:41:43.266130       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 08:41:43.266262       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 08:41:44.072890       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:41:44.080525       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:41:44.080644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:41:44.750567       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:41:44.801229       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:41:44.903834       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:41:44.910704       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 08:41:44.911792       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:41:44.916638       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:41:45.103928       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:41:46.604806       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:41:46.621587       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:41:46.632358       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 08:41:59.838596       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:41:59.989364       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3a3a4da63be8b591cb08202b3fb1a9b242f87a54811f462f72de264b9c1b565d] <==
	I1123 08:41:59.234069       1 shared_informer.go:318] Caches are synced for cronjob
	I1123 08:41:59.238552       1 shared_informer.go:318] Caches are synced for disruption
	I1123 08:41:59.287773       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:41:59.635666       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:41:59.635720       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:41:59.643988       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:41:59.852836       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mrfgl"
	I1123 08:41:59.861554       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dk6g5"
	I1123 08:41:59.994988       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 08:42:00.250351       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-q4lbv"
	I1123 08:42:00.296011       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-j889m"
	I1123 08:42:00.327262       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="332.952552ms"
	I1123 08:42:00.350757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.440283ms"
	I1123 08:42:00.350887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.187µs"
	I1123 08:42:01.812294       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 08:42:01.846504       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-j889m"
	I1123 08:42:01.870499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.780918ms"
	I1123 08:42:01.879884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.337438ms"
	I1123 08:42:01.882600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.828µs"
	I1123 08:42:13.666896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="126.5µs"
	I1123 08:42:13.681803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.242µs"
	I1123 08:42:14.082595       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1123 08:42:14.904458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.896µs"
	I1123 08:42:14.939466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.592228ms"
	I1123 08:42:14.940407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.694µs"
	
	
	==> kube-proxy [a92786aea3fde1301dec08d36ed3b9e913c480310fa0d744d9a1cf2c70d26621] <==
	I1123 08:42:00.964044       1 server_others.go:69] "Using iptables proxy"
	I1123 08:42:01.013279       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 08:42:01.120318       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:42:01.122189       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:42:01.122231       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:42:01.122239       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:42:01.122283       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:42:01.122555       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:42:01.122986       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:42:01.123668       1 config.go:188] "Starting service config controller"
	I1123 08:42:01.123741       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:42:01.123783       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:42:01.123795       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:42:01.124686       1 config.go:315] "Starting node config controller"
	I1123 08:42:01.124705       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:42:01.224043       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 08:42:01.224109       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:42:01.225482       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [81034c6fa713b6148ba16d2f50c50ea8e020311ed53ed7d84f6606a76362fc4f] <==
	W1123 08:41:43.659650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 08:41:43.659675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 08:41:43.666035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 08:41:43.666076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 08:41:43.666131       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 08:41:43.666152       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 08:41:43.666282       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1123 08:41:43.666305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 08:41:43.666372       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 08:41:43.666389       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 08:41:43.666441       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:41:43.666456       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:41:43.666515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 08:41:43.666530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 08:41:43.666580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1123 08:41:43.666596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1123 08:41:43.666648       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 08:41:43.666663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 08:41:43.666710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 08:41:43.666725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1123 08:41:44.500880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 08:41:44.500926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 08:41:44.538573       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1123 08:41:44.538617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1123 08:41:45.053879       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.155079    1561 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.155695    1561 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.867270    1561 topology_manager.go:215] "Topology Admit Handler" podUID="53d90f3f-687b-45a0-a344-321a75f38a20" podNamespace="kube-system" podName="kindnet-mrfgl"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.882839    1561 topology_manager.go:215] "Topology Admit Handler" podUID="27bc489f-26f8-4848-9df2-6530dcad7423" podNamespace="kube-system" podName="kube-proxy-dk6g5"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909264    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53d90f3f-687b-45a0-a344-321a75f38a20-xtables-lock\") pod \"kindnet-mrfgl\" (UID: \"53d90f3f-687b-45a0-a344-321a75f38a20\") " pod="kube-system/kindnet-mrfgl"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909324    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k598w\" (UniqueName: \"kubernetes.io/projected/53d90f3f-687b-45a0-a344-321a75f38a20-kube-api-access-k598w\") pod \"kindnet-mrfgl\" (UID: \"53d90f3f-687b-45a0-a344-321a75f38a20\") " pod="kube-system/kindnet-mrfgl"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909349    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27bc489f-26f8-4848-9df2-6530dcad7423-kube-proxy\") pod \"kube-proxy-dk6g5\" (UID: \"27bc489f-26f8-4848-9df2-6530dcad7423\") " pod="kube-system/kube-proxy-dk6g5"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909373    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27bc489f-26f8-4848-9df2-6530dcad7423-xtables-lock\") pod \"kube-proxy-dk6g5\" (UID: \"27bc489f-26f8-4848-9df2-6530dcad7423\") " pod="kube-system/kube-proxy-dk6g5"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909397    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27bc489f-26f8-4848-9df2-6530dcad7423-lib-modules\") pod \"kube-proxy-dk6g5\" (UID: \"27bc489f-26f8-4848-9df2-6530dcad7423\") " pod="kube-system/kube-proxy-dk6g5"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909438    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/53d90f3f-687b-45a0-a344-321a75f38a20-cni-cfg\") pod \"kindnet-mrfgl\" (UID: \"53d90f3f-687b-45a0-a344-321a75f38a20\") " pod="kube-system/kindnet-mrfgl"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909462    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53d90f3f-687b-45a0-a344-321a75f38a20-lib-modules\") pod \"kindnet-mrfgl\" (UID: \"53d90f3f-687b-45a0-a344-321a75f38a20\") " pod="kube-system/kindnet-mrfgl"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909488    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djzpr\" (UniqueName: \"kubernetes.io/projected/27bc489f-26f8-4848-9df2-6530dcad7423-kube-api-access-djzpr\") pod \"kube-proxy-dk6g5\" (UID: \"27bc489f-26f8-4848-9df2-6530dcad7423\") " pod="kube-system/kube-proxy-dk6g5"
	Nov 23 08:42:00 old-k8s-version-180638 kubelet[1561]: I1123 08:42:00.902067    1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dk6g5" podStartSLOduration=1.902024352 podCreationTimestamp="2025-11-23 08:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:42:00.901629237 +0000 UTC m=+14.326420754" watchObservedRunningTime="2025-11-23 08:42:00.902024352 +0000 UTC m=+14.326815869"
	Nov 23 08:42:06 old-k8s-version-180638 kubelet[1561]: I1123 08:42:06.732067    1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-mrfgl" podStartSLOduration=5.495030012 podCreationTimestamp="2025-11-23 08:41:59 +0000 UTC" firstStartedPulling="2025-11-23 08:42:00.776285773 +0000 UTC m=+14.201077291" lastFinishedPulling="2025-11-23 08:42:03.013276696 +0000 UTC m=+16.438068214" observedRunningTime="2025-11-23 08:42:03.871246056 +0000 UTC m=+17.296037582" watchObservedRunningTime="2025-11-23 08:42:06.732020935 +0000 UTC m=+20.156812461"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.622916    1561 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.659700    1561 topology_manager.go:215] "Topology Admit Handler" podUID="9a14996d-e910-4a4f-a6f6-f2d8565a4b9c" podNamespace="kube-system" podName="coredns-5dd5756b68-q4lbv"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.662325    1561 topology_manager.go:215] "Topology Admit Handler" podUID="fa923b06-d896-468f-8e82-51b4e9df88dc" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.844230    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbqlk\" (UniqueName: \"kubernetes.io/projected/9a14996d-e910-4a4f-a6f6-f2d8565a4b9c-kube-api-access-cbqlk\") pod \"coredns-5dd5756b68-q4lbv\" (UID: \"9a14996d-e910-4a4f-a6f6-f2d8565a4b9c\") " pod="kube-system/coredns-5dd5756b68-q4lbv"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.844293    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsph6\" (UniqueName: \"kubernetes.io/projected/fa923b06-d896-468f-8e82-51b4e9df88dc-kube-api-access-wsph6\") pod \"storage-provisioner\" (UID: \"fa923b06-d896-468f-8e82-51b4e9df88dc\") " pod="kube-system/storage-provisioner"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.844319    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a14996d-e910-4a4f-a6f6-f2d8565a4b9c-config-volume\") pod \"coredns-5dd5756b68-q4lbv\" (UID: \"9a14996d-e910-4a4f-a6f6-f2d8565a4b9c\") " pod="kube-system/coredns-5dd5756b68-q4lbv"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.844356    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fa923b06-d896-468f-8e82-51b4e9df88dc-tmp\") pod \"storage-provisioner\" (UID: \"fa923b06-d896-468f-8e82-51b4e9df88dc\") " pod="kube-system/storage-provisioner"
	Nov 23 08:42:14 old-k8s-version-180638 kubelet[1561]: I1123 08:42:14.902124    1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q4lbv" podStartSLOduration=14.902081859 podCreationTimestamp="2025-11-23 08:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:42:14.90156744 +0000 UTC m=+28.326358958" watchObservedRunningTime="2025-11-23 08:42:14.902081859 +0000 UTC m=+28.326873377"
	Nov 23 08:42:17 old-k8s-version-180638 kubelet[1561]: I1123 08:42:17.115903    1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.115773659 podCreationTimestamp="2025-11-23 08:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:42:14.969166121 +0000 UTC m=+28.393957647" watchObservedRunningTime="2025-11-23 08:42:17.115773659 +0000 UTC m=+30.540565177"
	Nov 23 08:42:17 old-k8s-version-180638 kubelet[1561]: I1123 08:42:17.116261    1561 topology_manager.go:215] "Topology Admit Handler" podUID="54457203-a4b0-4bfe-b7e6-9804ec70353f" podNamespace="default" podName="busybox"
	Nov 23 08:42:17 old-k8s-version-180638 kubelet[1561]: I1123 08:42:17.162701    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rd6c\" (UniqueName: \"kubernetes.io/projected/54457203-a4b0-4bfe-b7e6-9804ec70353f-kube-api-access-5rd6c\") pod \"busybox\" (UID: \"54457203-a4b0-4bfe-b7e6-9804ec70353f\") " pod="default/busybox"
	
	
	==> storage-provisioner [d28eb2e2ce19649f4947ef6afbf30d211d2dfa34551b90f0c10c58fdb65b63cd] <==
	I1123 08:42:14.310989       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:42:14.328878       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:42:14.329183       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:42:14.342255       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:42:14.342845       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46657fa0-d0c5-44e7-b4c5-6303b10aff5f", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-180638_72561ecd-5ccf-4007-bbe6-862fc9539cb1 became leader
	I1123 08:42:14.342920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-180638_72561ecd-5ccf-4007-bbe6-862fc9539cb1!
	I1123 08:42:14.443997       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-180638_72561ecd-5ccf-4007-bbe6-862fc9539cb1!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-180638 -n old-k8s-version-180638
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-180638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-180638
helpers_test.go:243: (dbg) docker inspect old-k8s-version-180638:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f",
	        "Created": "2025-11-23T08:41:19.865592877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197224,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:41:19.943635138Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f/hostname",
	        "HostsPath": "/var/lib/docker/containers/3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f/hosts",
	        "LogPath": "/var/lib/docker/containers/3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f/3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f-json.log",
	        "Name": "/old-k8s-version-180638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-180638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-180638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3fb449072f419f1d1ff9eebb56f96c76cc24ab8ceb8213db71616f0ddddcbb9f",
	                "LowerDir": "/var/lib/docker/overlay2/3a0f954d6f7082ad577dca92fa6658b1e327bb820ce9a801d55d584f14165f01-init/diff:/var/lib/docker/overlay2/88c30082a717909d357f7d81c88a05ce3487a40d372ee6dc57fb9f012e0502da/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a0f954d6f7082ad577dca92fa6658b1e327bb820ce9a801d55d584f14165f01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a0f954d6f7082ad577dca92fa6658b1e327bb820ce9a801d55d584f14165f01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a0f954d6f7082ad577dca92fa6658b1e327bb820ce9a801d55d584f14165f01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-180638",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-180638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-180638",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-180638",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-180638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c7b0d9b425062d52a0c8052c45b2a62780ff3f6f2620c50e9e88251d56098ed9",
	            "SandboxKey": "/var/run/docker/netns/c7b0d9b42506",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-180638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:cc:5c:df:67:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec0f96b222364b6472248735ae9433b2f33bdeaa152953368412a68215eb42c4",
	                    "EndpointID": "20998764ba69f988f94705bb48be4dc33edbb29c350250a4be2539cea69e130e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-180638",
	                        "3fb449072f41"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
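The inspect output above also records the ephemeral host ports Docker assigned to the container; 8443/tcp, the Kubernetes API server port, is published on 127.0.0.1:33056 in this run. Purely as an illustration of reading that mapping back, and not as part of the test harness, a minimal Go sketch using the docker CLI (assumed to be on PATH) and the container name from this run could look like this:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Extract only the host port bound to 8443/tcp for the container
		// created in this run (33056 in the inspect output above).
		out, err := exec.Command("docker", "inspect",
			"--format", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"old-k8s-version-180638").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
	}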
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180638 -n old-k8s-version-180638
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-180638 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-180638 logs -n 25: (1.238926736s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-440243 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo docker system info                                                                                                                                                                                                            │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo containerd config dump                                                                                                                                                                                                        │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo crio config                                                                                                                                                                                                                   │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ delete  │ -p cilium-440243                                                                                                                                                                                                                                    │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │ 23 Nov 25 08:39 UTC │
	│ start   │ -p cert-expiration-119748 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │ 23 Nov 25 08:40 UTC │
	│ ssh     │ force-systemd-env-760522 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ delete  │ -p force-systemd-env-760522                                                                                                                                                                                                                         │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ start   │ -p cert-options-106536 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ cert-options-106536 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ -p cert-options-106536 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p cert-options-106536                                                                                                                                                                                                                              │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:41:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:41:13.503798  196829 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:41:13.504001  196829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:41:13.504037  196829 out.go:374] Setting ErrFile to fd 2...
	I1123 08:41:13.504057  196829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:41:13.504449  196829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:41:13.504989  196829 out.go:368] Setting JSON to false
	I1123 08:41:13.507307  196829 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5022,"bootTime":1763882251,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:41:13.507402  196829 start.go:143] virtualization:  
	I1123 08:41:13.511220  196829 out.go:179] * [old-k8s-version-180638] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:41:13.515732  196829 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:41:13.516085  196829 notify.go:221] Checking for updates...
	I1123 08:41:13.523195  196829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:41:13.526521  196829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:41:13.529705  196829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 08:41:13.532894  196829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:41:13.536018  196829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:41:13.539629  196829 config.go:182] Loaded profile config "cert-expiration-119748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:41:13.539739  196829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:41:13.574366  196829 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:41:13.574516  196829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:41:13.638032  196829 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:41:13.62864309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:41:13.638135  196829 docker.go:319] overlay module found
	I1123 08:41:13.643635  196829 out.go:179] * Using the docker driver based on user configuration
	I1123 08:41:13.646835  196829 start.go:309] selected driver: docker
	I1123 08:41:13.646859  196829 start.go:927] validating driver "docker" against <nil>
	I1123 08:41:13.646879  196829 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:41:13.647612  196829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:41:13.702166  196829 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:41:13.693228668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:41:13.702317  196829 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:41:13.702534  196829 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:41:13.705700  196829 out.go:179] * Using Docker driver with root privileges
	I1123 08:41:13.708681  196829 cni.go:84] Creating CNI manager for ""
	I1123 08:41:13.708750  196829 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:41:13.708770  196829 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:41:13.708863  196829 start.go:353] cluster config:
	{Name:old-k8s-version-180638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:41:13.711891  196829 out.go:179] * Starting "old-k8s-version-180638" primary control-plane node in "old-k8s-version-180638" cluster
	I1123 08:41:13.714733  196829 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:41:13.717633  196829 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:41:13.720589  196829 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:41:13.720638  196829 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 08:41:13.720665  196829 cache.go:65] Caching tarball of preloaded images
	I1123 08:41:13.720676  196829 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:41:13.720783  196829 preload.go:238] Found /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:41:13.720794  196829 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:41:13.720923  196829 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/config.json ...
	I1123 08:41:13.720948  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/config.json: {Name:mk3fa6091d320fb60049f236674c350f36f8b1c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:13.740066  196829 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:41:13.740090  196829 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:41:13.740110  196829 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:41:13.740140  196829 start.go:360] acquireMachinesLock for old-k8s-version-180638: {Name:mk02adabcbe3b4194eb9b9cf13dfbc9bffd5d61a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:41:13.740251  196829 start.go:364] duration metric: took 92.325µs to acquireMachinesLock for "old-k8s-version-180638"
	I1123 08:41:13.740280  196829 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-180638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180638 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:41:13.740345  196829 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:41:13.743708  196829 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:41:13.743928  196829 start.go:159] libmachine.API.Create for "old-k8s-version-180638" (driver="docker")
	I1123 08:41:13.743964  196829 client.go:173] LocalClient.Create starting
	I1123 08:41:13.744044  196829 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem
	I1123 08:41:13.744081  196829 main.go:143] libmachine: Decoding PEM data...
	I1123 08:41:13.744099  196829 main.go:143] libmachine: Parsing certificate...
	I1123 08:41:13.744156  196829 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem
	I1123 08:41:13.744179  196829 main.go:143] libmachine: Decoding PEM data...
	I1123 08:41:13.744191  196829 main.go:143] libmachine: Parsing certificate...
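	The libmachine lines above (reading ca.pem and cert.pem, decoding the PEM data, parsing the certificates) follow the usual Go standard-library flow. A standalone sketch of that flow only, not minikube's actual code, using a hypothetical local file name:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical file name; the run above reads certs from
		// /home/jenkins/minikube-integration/21966-2339/.minikube/certs/.
		data, err := os.ReadFile("ca.pem")
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		block, _ := pem.Decode(data) // "Decoding PEM data..."
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
		if err != nil {
			fmt.Println("parse:", err)
			return
		}
		fmt.Println("subject:", cert.Subject)
	}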
	I1123 08:41:13.744566  196829 cli_runner.go:164] Run: docker network inspect old-k8s-version-180638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:41:13.760425  196829 cli_runner.go:211] docker network inspect old-k8s-version-180638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:41:13.760511  196829 network_create.go:284] running [docker network inspect old-k8s-version-180638] to gather additional debugging logs...
	I1123 08:41:13.760531  196829 cli_runner.go:164] Run: docker network inspect old-k8s-version-180638
	W1123 08:41:13.775922  196829 cli_runner.go:211] docker network inspect old-k8s-version-180638 returned with exit code 1
	I1123 08:41:13.775955  196829 network_create.go:287] error running [docker network inspect old-k8s-version-180638]: docker network inspect old-k8s-version-180638: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-180638 not found
	I1123 08:41:13.775968  196829 network_create.go:289] output of [docker network inspect old-k8s-version-180638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-180638 not found
	
	** /stderr **
	I1123 08:41:13.776076  196829 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:41:13.792199  196829 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a946cc9c0edf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:ea:52:17:a9:7a} reservation:<nil>}
	I1123 08:41:13.792559  196829 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb33daef15c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:08:1d:d1:c6:df} reservation:<nil>}
	I1123 08:41:13.792931  196829 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb61edac6088 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:64:59:e2:c3:5a} reservation:<nil>}
	I1123 08:41:13.793382  196829 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1e140}
	I1123 08:41:13.793443  196829 network_create.go:124] attempt to create docker network old-k8s-version-180638 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 08:41:13.793513  196829 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-180638 old-k8s-version-180638
	I1123 08:41:13.859515  196829 network_create.go:108] docker network old-k8s-version-180638 192.168.76.0/24 created
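	For orientation, the network.go lines above walk candidate /24 subnets, skip 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because bridges already exist for them, and settle on 192.168.76.0/24, which is then used to create the docker network. A tiny sketch of that observed third-octet stepping (an illustration of the pattern visible in this log, not minikube's actual subnet picker):

	package main

	import "fmt"

	func main() {
		// Third octets that already had docker bridges in the run above.
		taken := map[int]bool{49: true, 58: true, 67: true}
		for third := 49; third <= 254; third += 9 { // 49, 58, 67, 76, ... as seen in the log
			if taken[third] {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", third)
			break
		}
	}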
	I1123 08:41:13.859564  196829 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-180638" container
	I1123 08:41:13.859638  196829 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:41:13.877503  196829 cli_runner.go:164] Run: docker volume create old-k8s-version-180638 --label name.minikube.sigs.k8s.io=old-k8s-version-180638 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:41:13.898930  196829 oci.go:103] Successfully created a docker volume old-k8s-version-180638
	I1123 08:41:13.899032  196829 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-180638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-180638 --entrypoint /usr/bin/test -v old-k8s-version-180638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:41:14.458747  196829 oci.go:107] Successfully prepared a docker volume old-k8s-version-180638
	I1123 08:41:14.458805  196829 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:41:14.458814  196829 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:41:14.458892  196829 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-180638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:41:19.794152  196829 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-180638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.335195842s)
	I1123 08:41:19.794189  196829 kic.go:203] duration metric: took 5.335371475s to extract preloaded images to volume ...
	W1123 08:41:19.794328  196829 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:41:19.794436  196829 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:41:19.848844  196829 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-180638 --name old-k8s-version-180638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-180638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-180638 --network old-k8s-version-180638 --ip 192.168.76.2 --volume old-k8s-version-180638:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:41:20.177907  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Running}}
	I1123 08:41:20.204948  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:41:20.227539  196829 cli_runner.go:164] Run: docker exec old-k8s-version-180638 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:41:20.289856  196829 oci.go:144] the created container "old-k8s-version-180638" has a running status.
	I1123 08:41:20.289891  196829 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa...
	I1123 08:41:20.448285  196829 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:41:20.475665  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:41:20.521617  196829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:41:20.521635  196829 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-180638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:41:20.589359  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:41:20.625639  196829 machine.go:94] provisionDockerMachine start ...
	I1123 08:41:20.625720  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:20.654376  196829 main.go:143] libmachine: Using SSH client type: native
	I1123 08:41:20.655192  196829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1123 08:41:20.655341  196829 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:41:20.656290  196829 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:41:23.816940  196829 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-180638
	
	I1123 08:41:23.816964  196829 ubuntu.go:182] provisioning hostname "old-k8s-version-180638"
	I1123 08:41:23.817040  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:23.833840  196829 main.go:143] libmachine: Using SSH client type: native
	I1123 08:41:23.834172  196829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1123 08:41:23.834187  196829 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-180638 && echo "old-k8s-version-180638" | sudo tee /etc/hostname
	I1123 08:41:23.999609  196829 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-180638
	
	I1123 08:41:23.999698  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:24.020254  196829 main.go:143] libmachine: Using SSH client type: native
	I1123 08:41:24.020584  196829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1123 08:41:24.020601  196829 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-180638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-180638/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-180638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:41:24.185924  196829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:41:24.185946  196829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-2339/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-2339/.minikube}
	I1123 08:41:24.185967  196829 ubuntu.go:190] setting up certificates
	I1123 08:41:24.185976  196829 provision.go:84] configureAuth start
	I1123 08:41:24.186052  196829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-180638
	I1123 08:41:24.215320  196829 provision.go:143] copyHostCerts
	I1123 08:41:24.215378  196829 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem, removing ...
	I1123 08:41:24.215387  196829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem
	I1123 08:41:24.215451  196829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem (1078 bytes)
	I1123 08:41:24.215548  196829 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem, removing ...
	I1123 08:41:24.215553  196829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem
	I1123 08:41:24.215581  196829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem (1123 bytes)
	I1123 08:41:24.215633  196829 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem, removing ...
	I1123 08:41:24.215638  196829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem
	I1123 08:41:24.215661  196829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem (1675 bytes)
	I1123 08:41:24.216026  196829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-180638 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-180638]
	I1123 08:41:24.624778  196829 provision.go:177] copyRemoteCerts
	I1123 08:41:24.624888  196829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:41:24.624959  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:24.646886  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:41:24.753771  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:41:24.771993  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:41:24.790069  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 08:41:24.807496  196829 provision.go:87] duration metric: took 621.497153ms to configureAuth
	I1123 08:41:24.807563  196829 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:41:24.807769  196829 config.go:182] Loaded profile config "old-k8s-version-180638": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:41:24.807806  196829 machine.go:97] duration metric: took 4.182148274s to provisionDockerMachine
	I1123 08:41:24.807853  196829 client.go:176] duration metric: took 11.063877137s to LocalClient.Create
	I1123 08:41:24.807895  196829 start.go:167] duration metric: took 11.063966541s to libmachine.API.Create "old-k8s-version-180638"
	I1123 08:41:24.807925  196829 start.go:293] postStartSetup for "old-k8s-version-180638" (driver="docker")
	I1123 08:41:24.807964  196829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:41:24.808042  196829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:41:24.808096  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:24.825195  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:41:24.930003  196829 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:41:24.933389  196829 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:41:24.933440  196829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:41:24.933453  196829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/addons for local assets ...
	I1123 08:41:24.933516  196829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/files for local assets ...
	I1123 08:41:24.933597  196829 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem -> 41512.pem in /etc/ssl/certs
	I1123 08:41:24.933700  196829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:41:24.941173  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:41:24.960763  196829 start.go:296] duration metric: took 152.794115ms for postStartSetup
	I1123 08:41:24.961139  196829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-180638
	I1123 08:41:24.978306  196829 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/config.json ...
	I1123 08:41:24.978587  196829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:41:24.978642  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:24.994847  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:41:25.098792  196829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:41:25.103719  196829 start.go:128] duration metric: took 11.363355721s to createHost
	I1123 08:41:25.103745  196829 start.go:83] releasing machines lock for "old-k8s-version-180638", held for 11.363481187s
	I1123 08:41:25.103820  196829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-180638
	I1123 08:41:25.123598  196829 ssh_runner.go:195] Run: cat /version.json
	I1123 08:41:25.123615  196829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:41:25.123646  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:25.123677  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:41:25.149385  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:41:25.159257  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:41:25.253035  196829 ssh_runner.go:195] Run: systemctl --version
	I1123 08:41:25.348445  196829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:41:25.352830  196829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:41:25.352933  196829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:41:25.381383  196829 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:41:25.381469  196829 start.go:496] detecting cgroup driver to use...
	I1123 08:41:25.381508  196829 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:41:25.381570  196829 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:41:25.397040  196829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:41:25.410260  196829 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:41:25.410362  196829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:41:25.428008  196829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:41:25.447082  196829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:41:25.620588  196829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:41:25.749588  196829 docker.go:234] disabling docker service ...
	I1123 08:41:25.749661  196829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:41:25.772076  196829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:41:25.784914  196829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:41:25.899082  196829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:41:26.009981  196829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:41:26.025315  196829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:41:26.039953  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1123 08:41:26.049471  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:41:26.059847  196829 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:41:26.060009  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:41:26.069667  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:41:26.079903  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:41:26.089816  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:41:26.099752  196829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:41:26.108060  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:41:26.117585  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:41:26.126366  196829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:41:26.135803  196829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:41:26.143649  196829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:41:26.151206  196829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:41:26.281475  196829 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:41:26.394263  196829 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:41:26.394379  196829 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:41:26.398397  196829 start.go:564] Will wait 60s for crictl version
	I1123 08:41:26.398525  196829 ssh_runner.go:195] Run: which crictl
	I1123 08:41:26.402050  196829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:41:26.433447  196829 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:41:26.433548  196829 ssh_runner.go:195] Run: containerd --version
	I1123 08:41:26.456534  196829 ssh_runner.go:195] Run: containerd --version
	I1123 08:41:26.486458  196829 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1123 08:41:26.489565  196829 cli_runner.go:164] Run: docker network inspect old-k8s-version-180638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:41:26.507660  196829 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:41:26.511689  196829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:41:26.521591  196829 kubeadm.go:884] updating cluster {Name:old-k8s-version-180638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:41:26.521716  196829 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:41:26.521782  196829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:41:26.552790  196829 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:41:26.552815  196829 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:41:26.552879  196829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:41:26.589503  196829 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:41:26.589526  196829 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:41:26.589533  196829 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1123 08:41:26.589674  196829 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-180638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:41:26.589739  196829 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:41:26.615213  196829 cni.go:84] Creating CNI manager for ""
	I1123 08:41:26.615295  196829 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:41:26.615324  196829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:41:26.615377  196829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-180638 NodeName:old-k8s-version-180638 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:41:26.615549  196829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-180638"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:41:26.615640  196829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 08:41:26.623537  196829 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:41:26.623635  196829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:41:26.631295  196829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1123 08:41:26.643882  196829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:41:26.657243  196829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1123 08:41:26.669640  196829 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:41:26.673282  196829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:41:26.685864  196829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:41:26.794513  196829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:41:26.810973  196829 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638 for IP: 192.168.76.2
	I1123 08:41:26.811039  196829 certs.go:195] generating shared ca certs ...
	I1123 08:41:26.811080  196829 certs.go:227] acquiring lock for ca certs: {Name:mke0fc62f41acbef5eb3e84af3a3b8f9858bd1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:26.811250  196829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key
	I1123 08:41:26.811333  196829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key
	I1123 08:41:26.811355  196829 certs.go:257] generating profile certs ...
	I1123 08:41:26.811440  196829 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.key
	I1123 08:41:26.811477  196829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt with IP's: []
	I1123 08:41:26.973605  196829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt ...
	I1123 08:41:26.973639  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt: {Name:mke32e0874274fa8086c901b1e6afbf9faff17cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:26.973836  196829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.key ...
	I1123 08:41:26.973854  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.key: {Name:mk164b3f8143768da540cf1b000f576503ef0774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:26.974478  196829 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key.28528907
	I1123 08:41:26.974505  196829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt.28528907 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 08:41:27.162797  196829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt.28528907 ...
	I1123 08:41:27.162827  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt.28528907: {Name:mk89f25fc4240f5ec0b53706cf7a05d65ec41dcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:27.163533  196829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key.28528907 ...
	I1123 08:41:27.163550  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key.28528907: {Name:mkceae69a15be6eedc78c0f192aa68e5077c2c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:27.164156  196829 certs.go:382] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt.28528907 -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt
	I1123 08:41:27.164252  196829 certs.go:386] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key.28528907 -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key
	I1123 08:41:27.164317  196829 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.key
	I1123 08:41:27.164337  196829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.crt with IP's: []
	I1123 08:41:27.589335  196829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.crt ...
	I1123 08:41:27.589366  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.crt: {Name:mk5e88fa47e7c5af72b6e967a38cd87e0cc58d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:27.590109  196829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.key ...
	I1123 08:41:27.590126  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.key: {Name:mka6f06ef565fc329562ab2f39faf7c67e598a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:41:27.590847  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem (1338 bytes)
	W1123 08:41:27.590897  196829 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151_empty.pem, impossibly tiny 0 bytes
	I1123 08:41:27.590910  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 08:41:27.590954  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:41:27.590984  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:41:27.591012  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem (1675 bytes)
	I1123 08:41:27.591064  196829 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:41:27.591653  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:41:27.611397  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:41:27.628655  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:41:27.646428  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:41:27.663648  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 08:41:27.680373  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:41:27.697528  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:41:27.718625  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:41:27.735969  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:41:27.753670  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem --> /usr/share/ca-certificates/4151.pem (1338 bytes)
	I1123 08:41:27.772203  196829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /usr/share/ca-certificates/41512.pem (1708 bytes)
	I1123 08:41:27.790388  196829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:41:27.803782  196829 ssh_runner.go:195] Run: openssl version
	I1123 08:41:27.810231  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4151.pem && ln -fs /usr/share/ca-certificates/4151.pem /etc/ssl/certs/4151.pem"
	I1123 08:41:27.818398  196829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4151.pem
	I1123 08:41:27.822235  196829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:02 /usr/share/ca-certificates/4151.pem
	I1123 08:41:27.822298  196829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4151.pem
	I1123 08:41:27.864039  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4151.pem /etc/ssl/certs/51391683.0"
	I1123 08:41:27.872287  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41512.pem && ln -fs /usr/share/ca-certificates/41512.pem /etc/ssl/certs/41512.pem"
	I1123 08:41:27.880642  196829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41512.pem
	I1123 08:41:27.884373  196829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:02 /usr/share/ca-certificates/41512.pem
	I1123 08:41:27.884446  196829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41512.pem
	I1123 08:41:27.925706  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41512.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:41:27.933986  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:41:27.942212  196829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:41:27.945912  196829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:41:27.945995  196829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:41:27.987134  196829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:41:27.995374  196829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:41:27.999559  196829 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:41:27.999640  196829 kubeadm.go:401] StartCluster: {Name:old-k8s-version-180638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-180638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:41:27.999724  196829 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:41:27.999901  196829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:41:28.030022  196829 cri.go:89] found id: ""
	I1123 08:41:28.030090  196829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:41:28.038618  196829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:41:28.046519  196829 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:41:28.046606  196829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:41:28.054666  196829 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:41:28.054688  196829 kubeadm.go:158] found existing configuration files:
	
	I1123 08:41:28.054763  196829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:41:28.062722  196829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:41:28.062824  196829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:41:28.070543  196829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:41:28.078377  196829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:41:28.078469  196829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:41:28.085999  196829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:41:28.093970  196829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:41:28.094044  196829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:41:28.101534  196829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:41:28.109634  196829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:41:28.109755  196829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:41:28.117144  196829 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:41:28.212901  196829 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:41:28.307897  196829 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:41:46.723355  196829 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1123 08:41:46.723418  196829 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:41:46.723506  196829 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:41:46.723561  196829 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:41:46.723595  196829 kubeadm.go:319] OS: Linux
	I1123 08:41:46.723640  196829 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:41:46.723688  196829 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:41:46.723735  196829 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:41:46.723783  196829 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:41:46.723830  196829 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:41:46.723879  196829 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:41:46.723925  196829 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:41:46.723972  196829 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:41:46.724018  196829 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:41:46.724090  196829 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:41:46.724184  196829 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:41:46.724277  196829 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1123 08:41:46.724339  196829 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:41:46.730394  196829 out.go:252]   - Generating certificates and keys ...
	I1123 08:41:46.730493  196829 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:41:46.730559  196829 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:41:46.730625  196829 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:41:46.730681  196829 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:41:46.730740  196829 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:41:46.730789  196829 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:41:46.730843  196829 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:41:46.730979  196829 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-180638] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:41:46.731033  196829 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:41:46.731156  196829 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-180638] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:41:46.731221  196829 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:41:46.731283  196829 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:41:46.731327  196829 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:41:46.731382  196829 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:41:46.731432  196829 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:41:46.731487  196829 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:41:46.731552  196829 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:41:46.731606  196829 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:41:46.731687  196829 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:41:46.732404  196829 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:41:46.735499  196829 out.go:252]   - Booting up control plane ...
	I1123 08:41:46.735693  196829 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:41:46.735790  196829 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:41:46.735869  196829 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:41:46.735991  196829 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:41:46.736083  196829 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:41:46.736124  196829 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:41:46.736298  196829 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1123 08:41:46.736379  196829 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.016975 seconds
	I1123 08:41:46.736508  196829 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:41:46.736649  196829 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:41:46.736716  196829 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:41:46.737049  196829 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-180638 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:41:46.737114  196829 kubeadm.go:319] [bootstrap-token] Using token: 89uxh1.yt288j2wm2p51h2c
	I1123 08:41:46.740440  196829 out.go:252]   - Configuring RBAC rules ...
	I1123 08:41:46.740562  196829 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:41:46.740658  196829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:41:46.740805  196829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:41:46.740950  196829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:41:46.741070  196829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:41:46.741162  196829 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:41:46.741276  196829 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:41:46.741318  196829 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:41:46.741363  196829 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:41:46.741369  196829 kubeadm.go:319] 
	I1123 08:41:46.741466  196829 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:41:46.741471  196829 kubeadm.go:319] 
	I1123 08:41:46.741547  196829 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:41:46.741551  196829 kubeadm.go:319] 
	I1123 08:41:46.741575  196829 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:41:46.741639  196829 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:41:46.741693  196829 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:41:46.741696  196829 kubeadm.go:319] 
	I1123 08:41:46.741757  196829 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:41:46.741761  196829 kubeadm.go:319] 
	I1123 08:41:46.741808  196829 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:41:46.741811  196829 kubeadm.go:319] 
	I1123 08:41:46.741868  196829 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:41:46.741944  196829 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:41:46.742020  196829 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:41:46.742024  196829 kubeadm.go:319] 
	I1123 08:41:46.742111  196829 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:41:46.742188  196829 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:41:46.742192  196829 kubeadm.go:319] 
	I1123 08:41:46.742277  196829 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 89uxh1.yt288j2wm2p51h2c \
	I1123 08:41:46.742380  196829 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 \
	I1123 08:41:46.742400  196829 kubeadm.go:319] 	--control-plane 
	I1123 08:41:46.742404  196829 kubeadm.go:319] 
	I1123 08:41:46.742493  196829 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:41:46.742497  196829 kubeadm.go:319] 
	I1123 08:41:46.742578  196829 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 89uxh1.yt288j2wm2p51h2c \
	I1123 08:41:46.742696  196829 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 
	I1123 08:41:46.742705  196829 cni.go:84] Creating CNI manager for ""
	I1123 08:41:46.742712  196829 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:41:46.747905  196829 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:41:46.750796  196829 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:41:46.761561  196829 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1123 08:41:46.761582  196829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:41:46.780526  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:41:47.782764  196829 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.002206277s)
	I1123 08:41:47.782810  196829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:41:47.782925  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:47.783012  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-180638 minikube.k8s.io/updated_at=2025_11_23T08_41_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=old-k8s-version-180638 minikube.k8s.io/primary=true
	I1123 08:41:47.996747  196829 ops.go:34] apiserver oom_adj: -16
	I1123 08:41:47.996865  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:48.497263  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:48.997587  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:49.497238  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:49.996982  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:50.497817  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:50.996983  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:51.497681  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:51.997616  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:52.497659  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:52.997821  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:53.497324  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:53.997887  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:54.496981  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:54.996975  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:55.496982  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:55.997716  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:56.497689  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:56.997844  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:57.497606  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:57.997246  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:58.497272  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:58.997225  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:59.497615  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:41:59.996938  196829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:42:00.373202  196829 kubeadm.go:1114] duration metric: took 12.590316137s to wait for elevateKubeSystemPrivileges
	I1123 08:42:00.373235  196829 kubeadm.go:403] duration metric: took 32.37359943s to StartCluster
	I1123 08:42:00.373254  196829 settings.go:142] acquiring lock: {Name:mkfb77243b31dfe604b438e7da3f1bce2ba7b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:42:00.373329  196829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:42:00.374576  196829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:42:00.374865  196829 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:42:00.375126  196829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:42:00.375440  196829 config.go:182] Loaded profile config "old-k8s-version-180638": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:42:00.375497  196829 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:42:00.375560  196829 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-180638"
	I1123 08:42:00.375575  196829 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-180638"
	I1123 08:42:00.375597  196829 host.go:66] Checking if "old-k8s-version-180638" exists ...
	I1123 08:42:00.375813  196829 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-180638"
	I1123 08:42:00.375848  196829 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-180638"
	I1123 08:42:00.376308  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:42:00.376539  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:42:00.379011  196829 out.go:179] * Verifying Kubernetes components...
	I1123 08:42:00.382111  196829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:42:00.428496  196829 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-180638"
	I1123 08:42:00.428566  196829 host.go:66] Checking if "old-k8s-version-180638" exists ...
	I1123 08:42:00.429356  196829 cli_runner.go:164] Run: docker container inspect old-k8s-version-180638 --format={{.State.Status}}
	I1123 08:42:00.444047  196829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:42:00.448509  196829 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:42:00.448558  196829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:42:00.448647  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:42:00.472475  196829 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:42:00.472504  196829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:42:00.472636  196829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-180638
	I1123 08:42:00.490205  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:42:00.514193  196829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/old-k8s-version-180638/id_rsa Username:docker}
	I1123 08:42:00.878161  196829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:42:00.878301  196829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:42:00.916437  196829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:42:01.023971  196829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:42:01.723716  196829 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 08:42:01.726193  196829 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-180638" to be "Ready" ...
	I1123 08:42:02.171067  196829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147020479s)
	I1123 08:42:02.174415  196829 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 08:42:02.177439  196829 addons.go:530] duration metric: took 1.801906906s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 08:42:02.232613  196829 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-180638" context rescaled to 1 replicas
	W1123 08:42:03.730244  196829 node_ready.go:57] node "old-k8s-version-180638" has "Ready":"False" status (will retry)
	W1123 08:42:06.235867  196829 node_ready.go:57] node "old-k8s-version-180638" has "Ready":"False" status (will retry)
	W1123 08:42:08.729375  196829 node_ready.go:57] node "old-k8s-version-180638" has "Ready":"False" status (will retry)
	W1123 08:42:10.729575  196829 node_ready.go:57] node "old-k8s-version-180638" has "Ready":"False" status (will retry)
	W1123 08:42:12.729904  196829 node_ready.go:57] node "old-k8s-version-180638" has "Ready":"False" status (will retry)
	I1123 08:42:13.730112  196829 node_ready.go:49] node "old-k8s-version-180638" is "Ready"
	I1123 08:42:13.730141  196829 node_ready.go:38] duration metric: took 12.003828725s for node "old-k8s-version-180638" to be "Ready" ...
	I1123 08:42:13.730157  196829 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:42:13.730215  196829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:42:13.742876  196829 api_server.go:72] duration metric: took 13.367936978s to wait for apiserver process to appear ...
	I1123 08:42:13.742904  196829 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:42:13.742928  196829 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:42:13.752538  196829 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 08:42:13.753958  196829 api_server.go:141] control plane version: v1.28.0
	I1123 08:42:13.753984  196829 api_server.go:131] duration metric: took 11.072911ms to wait for apiserver health ...
	I1123 08:42:13.753994  196829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:42:13.757334  196829 system_pods.go:59] 8 kube-system pods found
	I1123 08:42:13.757377  196829 system_pods.go:61] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:42:13.757384  196829 system_pods.go:61] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:13.757390  196829 system_pods.go:61] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:13.757394  196829 system_pods.go:61] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:13.757398  196829 system_pods.go:61] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:13.757402  196829 system_pods.go:61] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:13.757449  196829 system_pods.go:61] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:13.757461  196829 system_pods.go:61] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:42:13.757470  196829 system_pods.go:74] duration metric: took 3.469421ms to wait for pod list to return data ...
	I1123 08:42:13.757483  196829 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:42:13.759772  196829 default_sa.go:45] found service account: "default"
	I1123 08:42:13.759795  196829 default_sa.go:55] duration metric: took 2.306419ms for default service account to be created ...
	I1123 08:42:13.759805  196829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:42:13.764346  196829 system_pods.go:86] 8 kube-system pods found
	I1123 08:42:13.764381  196829 system_pods.go:89] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:42:13.764387  196829 system_pods.go:89] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:13.764393  196829 system_pods.go:89] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:13.764398  196829 system_pods.go:89] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:13.764402  196829 system_pods.go:89] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:13.764426  196829 system_pods.go:89] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:13.764438  196829 system_pods.go:89] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:13.764445  196829 system_pods.go:89] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:42:13.764468  196829 retry.go:31] will retry after 231.795609ms: missing components: kube-dns
	I1123 08:42:14.002188  196829 system_pods.go:86] 8 kube-system pods found
	I1123 08:42:14.002226  196829 system_pods.go:89] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:42:14.002234  196829 system_pods.go:89] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:14.002241  196829 system_pods.go:89] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:14.002290  196829 system_pods.go:89] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:14.002297  196829 system_pods.go:89] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:14.002309  196829 system_pods.go:89] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:14.002313  196829 system_pods.go:89] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:14.002319  196829 system_pods.go:89] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:42:14.002358  196829 retry.go:31] will retry after 309.541133ms: missing components: kube-dns
	I1123 08:42:14.316329  196829 system_pods.go:86] 8 kube-system pods found
	I1123 08:42:14.316371  196829 system_pods.go:89] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:42:14.316378  196829 system_pods.go:89] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:14.316410  196829 system_pods.go:89] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:14.316416  196829 system_pods.go:89] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:14.316420  196829 system_pods.go:89] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:14.316425  196829 system_pods.go:89] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:14.316453  196829 system_pods.go:89] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:14.316462  196829 system_pods.go:89] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:42:14.316487  196829 retry.go:31] will retry after 469.87728ms: missing components: kube-dns
	I1123 08:42:14.791058  196829 system_pods.go:86] 8 kube-system pods found
	I1123 08:42:14.791093  196829 system_pods.go:89] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:42:14.791100  196829 system_pods.go:89] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:14.791106  196829 system_pods.go:89] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:14.791110  196829 system_pods.go:89] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:14.791115  196829 system_pods.go:89] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:14.791119  196829 system_pods.go:89] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:14.791123  196829 system_pods.go:89] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:14.791129  196829 system_pods.go:89] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:42:14.791144  196829 retry.go:31] will retry after 367.579223ms: missing components: kube-dns
	I1123 08:42:15.163345  196829 system_pods.go:86] 8 kube-system pods found
	I1123 08:42:15.163377  196829 system_pods.go:89] "coredns-5dd5756b68-q4lbv" [9a14996d-e910-4a4f-a6f6-f2d8565a4b9c] Running
	I1123 08:42:15.163384  196829 system_pods.go:89] "etcd-old-k8s-version-180638" [d7e82a35-eda7-493b-8f80-319fff10e0a8] Running
	I1123 08:42:15.163388  196829 system_pods.go:89] "kindnet-mrfgl" [53d90f3f-687b-45a0-a344-321a75f38a20] Running
	I1123 08:42:15.163393  196829 system_pods.go:89] "kube-apiserver-old-k8s-version-180638" [6d727a9f-96a5-47f1-8676-3463c38e31e8] Running
	I1123 08:42:15.163398  196829 system_pods.go:89] "kube-controller-manager-old-k8s-version-180638" [92875b86-8bd3-4b30-acdd-2c65db14c97e] Running
	I1123 08:42:15.163401  196829 system_pods.go:89] "kube-proxy-dk6g5" [27bc489f-26f8-4848-9df2-6530dcad7423] Running
	I1123 08:42:15.163405  196829 system_pods.go:89] "kube-scheduler-old-k8s-version-180638" [76e55a3f-6b02-43c4-ae79-01300e9dd2c6] Running
	I1123 08:42:15.163409  196829 system_pods.go:89] "storage-provisioner" [fa923b06-d896-468f-8e82-51b4e9df88dc] Running
	I1123 08:42:15.163417  196829 system_pods.go:126] duration metric: took 1.403606184s to wait for k8s-apps to be running ...
	I1123 08:42:15.163424  196829 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:42:15.163481  196829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:42:15.176644  196829 system_svc.go:56] duration metric: took 13.210368ms WaitForService to wait for kubelet
	I1123 08:42:15.176674  196829 kubeadm.go:587] duration metric: took 14.80173902s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:42:15.176693  196829 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:42:15.179781  196829 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:42:15.179818  196829 node_conditions.go:123] node cpu capacity is 2
	I1123 08:42:15.179832  196829 node_conditions.go:105] duration metric: took 3.134393ms to run NodePressure ...
	I1123 08:42:15.179843  196829 start.go:242] waiting for startup goroutines ...
	I1123 08:42:15.179851  196829 start.go:247] waiting for cluster config update ...
	I1123 08:42:15.179867  196829 start.go:256] writing updated cluster config ...
	I1123 08:42:15.180158  196829 ssh_runner.go:195] Run: rm -f paused
	I1123 08:42:15.184124  196829 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:42:15.188984  196829 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-q4lbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.197388  196829 pod_ready.go:94] pod "coredns-5dd5756b68-q4lbv" is "Ready"
	I1123 08:42:15.197483  196829 pod_ready.go:86] duration metric: took 8.468594ms for pod "coredns-5dd5756b68-q4lbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.200541  196829 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.205348  196829 pod_ready.go:94] pod "etcd-old-k8s-version-180638" is "Ready"
	I1123 08:42:15.205396  196829 pod_ready.go:86] duration metric: took 4.809714ms for pod "etcd-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.208274  196829 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.213022  196829 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-180638" is "Ready"
	I1123 08:42:15.213049  196829 pod_ready.go:86] duration metric: took 4.746468ms for pod "kube-apiserver-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.216062  196829 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.588621  196829 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-180638" is "Ready"
	I1123 08:42:15.588649  196829 pod_ready.go:86] duration metric: took 372.560174ms for pod "kube-controller-manager-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:15.789577  196829 pod_ready.go:83] waiting for pod "kube-proxy-dk6g5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:16.188996  196829 pod_ready.go:94] pod "kube-proxy-dk6g5" is "Ready"
	I1123 08:42:16.189025  196829 pod_ready.go:86] duration metric: took 399.418985ms for pod "kube-proxy-dk6g5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:16.388950  196829 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:16.788322  196829 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-180638" is "Ready"
	I1123 08:42:16.788348  196829 pod_ready.go:86] duration metric: took 399.371796ms for pod "kube-scheduler-old-k8s-version-180638" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:42:16.788362  196829 pod_ready.go:40] duration metric: took 1.604205013s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:42:16.845637  196829 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 08:42:16.848524  196829 out.go:203] 
	W1123 08:42:16.851133  196829 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:42:16.854166  196829 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:42:16.857768  196829 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-180638" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	91bc48b43ecc6       1611cd07b61d5       9 seconds ago       Running             busybox                   0                   e4ae249cb52e3       busybox                                          default
	d28eb2e2ce196       ba04bb24b9575       15 seconds ago      Running             storage-provisioner       0                   a34410332e173       storage-provisioner                              kube-system
	7c2ec14edc41a       97e04611ad434       15 seconds ago      Running             coredns                   0                   c2b32ac0a3158       coredns-5dd5756b68-q4lbv                         kube-system
	75439fed83684       b1a8c6f707935       26 seconds ago      Running             kindnet-cni               0                   4f646733919cf       kindnet-mrfgl                                    kube-system
	a92786aea3fde       940f54a5bcae9       29 seconds ago      Running             kube-proxy                0                   304e17d801222       kube-proxy-dk6g5                                 kube-system
	dd592fa780598       9cdd6470f48c8       50 seconds ago      Running             etcd                      0                   6c8aefe95a6ce       etcd-old-k8s-version-180638                      kube-system
	9b79849edeb76       00543d2fe5d71       50 seconds ago      Running             kube-apiserver            0                   53e8e5479de81       kube-apiserver-old-k8s-version-180638            kube-system
	3a3a4da63be8b       46cc66ccc7c19       50 seconds ago      Running             kube-controller-manager   0                   79bfc44a51fa1       kube-controller-manager-old-k8s-version-180638   kube-system
	81034c6fa713b       762dce4090c5f       50 seconds ago      Running             kube-scheduler            0                   13b0850cbdf71       kube-scheduler-old-k8s-version-180638            kube-system
	
	
	==> containerd <==
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.182847695Z" level=info msg="CreateContainer within sandbox \"c2b32ac0a3158e5b8e88a60e8ec54f99f67326e1aba5a91b8ead5c4893516fa1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c2ec14edc41a4bad3c997ddbc366a57e75740ffe5d4d804776570d3bbf4089a\""
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.185890280Z" level=info msg="StartContainer for \"7c2ec14edc41a4bad3c997ddbc366a57e75740ffe5d4d804776570d3bbf4089a\""
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.188348487Z" level=info msg="connecting to shim 7c2ec14edc41a4bad3c997ddbc366a57e75740ffe5d4d804776570d3bbf4089a" address="unix:///run/containerd/s/23e8ed77d6d2545cc040cf10e94b9aa1307cea730c4e678c5b1ba5d216eb3aae" protocol=ttrpc version=3
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.191389545Z" level=info msg="CreateContainer within sandbox \"a34410332e1739898fe28b96e52dd9c87f97e3c9bb7b1ffd7f9865c04fcab2a8\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"d28eb2e2ce19649f4947ef6afbf30d211d2dfa34551b90f0c10c58fdb65b63cd\""
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.193542238Z" level=info msg="StartContainer for \"d28eb2e2ce19649f4947ef6afbf30d211d2dfa34551b90f0c10c58fdb65b63cd\""
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.194431932Z" level=info msg="connecting to shim d28eb2e2ce19649f4947ef6afbf30d211d2dfa34551b90f0c10c58fdb65b63cd" address="unix:///run/containerd/s/21496efb2be254d32b19cec40d0f9ba01ff31efa61fb387b7a12652ab6551c66" protocol=ttrpc version=3
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.282934097Z" level=info msg="StartContainer for \"7c2ec14edc41a4bad3c997ddbc366a57e75740ffe5d4d804776570d3bbf4089a\" returns successfully"
	Nov 23 08:42:14 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:14.297466583Z" level=info msg="StartContainer for \"d28eb2e2ce19649f4947ef6afbf30d211d2dfa34551b90f0c10c58fdb65b63cd\" returns successfully"
	Nov 23 08:42:17 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:17.425584929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:54457203-a4b0-4bfe-b7e6-9804ec70353f,Namespace:default,Attempt:0,}"
	Nov 23 08:42:17 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:17.487893445Z" level=info msg="connecting to shim e4ae249cb52e36b4d0a2f9b31e40d2ac6f561a86c7f5020966174ef8dddb28bd" address="unix:///run/containerd/s/48de018ba0d9ba174f3818878e132ec8a301b930403195fb51f42bfd7ba5e6a1" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:42:17 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:17.543159907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:54457203-a4b0-4bfe-b7e6-9804ec70353f,Namespace:default,Attempt:0,} returns sandbox id \"e4ae249cb52e36b4d0a2f9b31e40d2ac6f561a86c7f5020966174ef8dddb28bd\""
	Nov 23 08:42:17 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:17.545984305Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.686274001Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.688431814Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.690941320Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.694195944Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.694856179Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.148586355s"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.694995922Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.698429970Z" level=info msg="CreateContainer within sandbox \"e4ae249cb52e36b4d0a2f9b31e40d2ac6f561a86c7f5020966174ef8dddb28bd\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.711352120Z" level=info msg="Container 91bc48b43ecc67ffc1bc3f7fdbc911d26a4116b41c47fa062edb6c0dda1555ed: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.724026742Z" level=info msg="CreateContainer within sandbox \"e4ae249cb52e36b4d0a2f9b31e40d2ac6f561a86c7f5020966174ef8dddb28bd\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"91bc48b43ecc67ffc1bc3f7fdbc911d26a4116b41c47fa062edb6c0dda1555ed\""
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.724804821Z" level=info msg="StartContainer for \"91bc48b43ecc67ffc1bc3f7fdbc911d26a4116b41c47fa062edb6c0dda1555ed\""
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.725703310Z" level=info msg="connecting to shim 91bc48b43ecc67ffc1bc3f7fdbc911d26a4116b41c47fa062edb6c0dda1555ed" address="unix:///run/containerd/s/48de018ba0d9ba174f3818878e132ec8a301b930403195fb51f42bfd7ba5e6a1" protocol=ttrpc version=3
	Nov 23 08:42:19 old-k8s-version-180638 containerd[758]: time="2025-11-23T08:42:19.798131338Z" level=info msg="StartContainer for \"91bc48b43ecc67ffc1bc3f7fdbc911d26a4116b41c47fa062edb6c0dda1555ed\" returns successfully"
	Nov 23 08:42:26 old-k8s-version-180638 containerd[758]: E1123 08:42:26.310347     758 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [7c2ec14edc41a4bad3c997ddbc366a57e75740ffe5d4d804776570d3bbf4089a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55800 - 59523 "HINFO IN 7767641017076382384.181717569997239392. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011046589s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-180638
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-180638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=old-k8s-version-180638
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_41_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:41:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-180638
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:42:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:42:17 +0000   Sun, 23 Nov 2025 08:41:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:42:17 +0000   Sun, 23 Nov 2025 08:41:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:42:17 +0000   Sun, 23 Nov 2025 08:41:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:42:17 +0000   Sun, 23 Nov 2025 08:42:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-180638
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                66eb206b-bbaa-475d-8a79-ca34c9a5fe12
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-q4lbv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-180638                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-mrfgl                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-180638             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-180638    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-dk6g5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-180638             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 43s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s   kubelet          Node old-k8s-version-180638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s   kubelet          Node old-k8s-version-180638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s   kubelet          Node old-k8s-version-180638 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-180638 event: Registered Node old-k8s-version-180638 in Controller
	  Normal  NodeReady                16s   kubelet          Node old-k8s-version-180638 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	
	
	==> etcd [dd592fa780598a368949db2030306613299c5b0608cf477fbac364062431cf64] <==
	{"level":"info","ts":"2025-11-23T08:41:39.991972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-23T08:41:39.99787Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:41:40.001646Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T08:41:40.001836Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T08:41:40.002083Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T08:41:40.005961Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T08:41:40.00623Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:41:40.257458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T08:41:40.25751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T08:41:40.257539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-23T08:41:40.257552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:41:40.257563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T08:41:40.257574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-23T08:41:40.257585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T08:41:40.268883Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-180638 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:41:40.268928Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:41:40.269997Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:41:40.270072Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:41:40.279065Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:41:40.280196Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T08:41:40.28074Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:41:40.280878Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T08:41:40.285959Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:41:40.2918Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:41:40.291928Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 08:42:29 up  1:24,  0 user,  load average: 2.87, 3.94, 3.11
	Linux old-k8s-version-180638 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [75439fed83684fc39ca1dda64cef2644f6e3027bddbd15dff08e7923652250de] <==
	I1123 08:42:03.267849       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:42:03.357718       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:42:03.358291       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:42:03.358311       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:42:03.358351       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:42:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:42:03.558800       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:42:03.558877       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:42:03.558907       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:42:03.559938       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:42:03.759127       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:42:03.759212       1 metrics.go:72] Registering metrics
	I1123 08:42:03.759307       1 controller.go:711] "Syncing nftables rules"
	I1123 08:42:13.563082       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:42:13.563142       1 main.go:301] handling current node
	I1123 08:42:23.558928       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:42:23.558968       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9b79849edeb76ebe3d1f35f60331849eb478148607f83d7e7cc04f6a89d49cef] <==
	I1123 08:41:43.257160       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:41:43.264540       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:41:43.264801       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 08:41:43.264982       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:41:43.265064       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:41:43.265148       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:41:43.265235       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:41:43.265825       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 08:41:43.266130       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 08:41:43.266262       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 08:41:44.072890       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:41:44.080525       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:41:44.080644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:41:44.750567       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:41:44.801229       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:41:44.903834       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:41:44.910704       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 08:41:44.911792       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:41:44.916638       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:41:45.103928       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:41:46.604806       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:41:46.621587       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:41:46.632358       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 08:41:59.838596       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:41:59.989364       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3a3a4da63be8b591cb08202b3fb1a9b242f87a54811f462f72de264b9c1b565d] <==
	I1123 08:41:59.234069       1 shared_informer.go:318] Caches are synced for cronjob
	I1123 08:41:59.238552       1 shared_informer.go:318] Caches are synced for disruption
	I1123 08:41:59.287773       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:41:59.635666       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:41:59.635720       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:41:59.643988       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:41:59.852836       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mrfgl"
	I1123 08:41:59.861554       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dk6g5"
	I1123 08:41:59.994988       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 08:42:00.250351       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-q4lbv"
	I1123 08:42:00.296011       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-j889m"
	I1123 08:42:00.327262       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="332.952552ms"
	I1123 08:42:00.350757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.440283ms"
	I1123 08:42:00.350887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.187µs"
	I1123 08:42:01.812294       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 08:42:01.846504       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-j889m"
	I1123 08:42:01.870499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.780918ms"
	I1123 08:42:01.879884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.337438ms"
	I1123 08:42:01.882600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.828µs"
	I1123 08:42:13.666896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="126.5µs"
	I1123 08:42:13.681803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.242µs"
	I1123 08:42:14.082595       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1123 08:42:14.904458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.896µs"
	I1123 08:42:14.939466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.592228ms"
	I1123 08:42:14.940407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.694µs"
	
	
	==> kube-proxy [a92786aea3fde1301dec08d36ed3b9e913c480310fa0d744d9a1cf2c70d26621] <==
	I1123 08:42:00.964044       1 server_others.go:69] "Using iptables proxy"
	I1123 08:42:01.013279       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 08:42:01.120318       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:42:01.122189       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:42:01.122231       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:42:01.122239       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:42:01.122283       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:42:01.122555       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:42:01.122986       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:42:01.123668       1 config.go:188] "Starting service config controller"
	I1123 08:42:01.123741       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:42:01.123783       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:42:01.123795       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:42:01.124686       1 config.go:315] "Starting node config controller"
	I1123 08:42:01.124705       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:42:01.224043       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 08:42:01.224109       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:42:01.225482       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [81034c6fa713b6148ba16d2f50c50ea8e020311ed53ed7d84f6606a76362fc4f] <==
	W1123 08:41:43.659650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 08:41:43.659675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 08:41:43.666035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 08:41:43.666076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 08:41:43.666131       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 08:41:43.666152       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 08:41:43.666282       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1123 08:41:43.666305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 08:41:43.666372       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 08:41:43.666389       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 08:41:43.666441       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:41:43.666456       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:41:43.666515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 08:41:43.666530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 08:41:43.666580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1123 08:41:43.666596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1123 08:41:43.666648       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 08:41:43.666663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 08:41:43.666710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 08:41:43.666725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1123 08:41:44.500880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 08:41:44.500926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 08:41:44.538573       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1123 08:41:44.538617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1123 08:41:45.053879       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.155079    1561 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.155695    1561 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.867270    1561 topology_manager.go:215] "Topology Admit Handler" podUID="53d90f3f-687b-45a0-a344-321a75f38a20" podNamespace="kube-system" podName="kindnet-mrfgl"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.882839    1561 topology_manager.go:215] "Topology Admit Handler" podUID="27bc489f-26f8-4848-9df2-6530dcad7423" podNamespace="kube-system" podName="kube-proxy-dk6g5"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909264    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53d90f3f-687b-45a0-a344-321a75f38a20-xtables-lock\") pod \"kindnet-mrfgl\" (UID: \"53d90f3f-687b-45a0-a344-321a75f38a20\") " pod="kube-system/kindnet-mrfgl"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909324    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k598w\" (UniqueName: \"kubernetes.io/projected/53d90f3f-687b-45a0-a344-321a75f38a20-kube-api-access-k598w\") pod \"kindnet-mrfgl\" (UID: \"53d90f3f-687b-45a0-a344-321a75f38a20\") " pod="kube-system/kindnet-mrfgl"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909349    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27bc489f-26f8-4848-9df2-6530dcad7423-kube-proxy\") pod \"kube-proxy-dk6g5\" (UID: \"27bc489f-26f8-4848-9df2-6530dcad7423\") " pod="kube-system/kube-proxy-dk6g5"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909373    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27bc489f-26f8-4848-9df2-6530dcad7423-xtables-lock\") pod \"kube-proxy-dk6g5\" (UID: \"27bc489f-26f8-4848-9df2-6530dcad7423\") " pod="kube-system/kube-proxy-dk6g5"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909397    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27bc489f-26f8-4848-9df2-6530dcad7423-lib-modules\") pod \"kube-proxy-dk6g5\" (UID: \"27bc489f-26f8-4848-9df2-6530dcad7423\") " pod="kube-system/kube-proxy-dk6g5"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909438    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/53d90f3f-687b-45a0-a344-321a75f38a20-cni-cfg\") pod \"kindnet-mrfgl\" (UID: \"53d90f3f-687b-45a0-a344-321a75f38a20\") " pod="kube-system/kindnet-mrfgl"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909462    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53d90f3f-687b-45a0-a344-321a75f38a20-lib-modules\") pod \"kindnet-mrfgl\" (UID: \"53d90f3f-687b-45a0-a344-321a75f38a20\") " pod="kube-system/kindnet-mrfgl"
	Nov 23 08:41:59 old-k8s-version-180638 kubelet[1561]: I1123 08:41:59.909488    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djzpr\" (UniqueName: \"kubernetes.io/projected/27bc489f-26f8-4848-9df2-6530dcad7423-kube-api-access-djzpr\") pod \"kube-proxy-dk6g5\" (UID: \"27bc489f-26f8-4848-9df2-6530dcad7423\") " pod="kube-system/kube-proxy-dk6g5"
	Nov 23 08:42:00 old-k8s-version-180638 kubelet[1561]: I1123 08:42:00.902067    1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dk6g5" podStartSLOduration=1.902024352 podCreationTimestamp="2025-11-23 08:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:42:00.901629237 +0000 UTC m=+14.326420754" watchObservedRunningTime="2025-11-23 08:42:00.902024352 +0000 UTC m=+14.326815869"
	Nov 23 08:42:06 old-k8s-version-180638 kubelet[1561]: I1123 08:42:06.732067    1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-mrfgl" podStartSLOduration=5.495030012 podCreationTimestamp="2025-11-23 08:41:59 +0000 UTC" firstStartedPulling="2025-11-23 08:42:00.776285773 +0000 UTC m=+14.201077291" lastFinishedPulling="2025-11-23 08:42:03.013276696 +0000 UTC m=+16.438068214" observedRunningTime="2025-11-23 08:42:03.871246056 +0000 UTC m=+17.296037582" watchObservedRunningTime="2025-11-23 08:42:06.732020935 +0000 UTC m=+20.156812461"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.622916    1561 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.659700    1561 topology_manager.go:215] "Topology Admit Handler" podUID="9a14996d-e910-4a4f-a6f6-f2d8565a4b9c" podNamespace="kube-system" podName="coredns-5dd5756b68-q4lbv"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.662325    1561 topology_manager.go:215] "Topology Admit Handler" podUID="fa923b06-d896-468f-8e82-51b4e9df88dc" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.844230    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbqlk\" (UniqueName: \"kubernetes.io/projected/9a14996d-e910-4a4f-a6f6-f2d8565a4b9c-kube-api-access-cbqlk\") pod \"coredns-5dd5756b68-q4lbv\" (UID: \"9a14996d-e910-4a4f-a6f6-f2d8565a4b9c\") " pod="kube-system/coredns-5dd5756b68-q4lbv"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.844293    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsph6\" (UniqueName: \"kubernetes.io/projected/fa923b06-d896-468f-8e82-51b4e9df88dc-kube-api-access-wsph6\") pod \"storage-provisioner\" (UID: \"fa923b06-d896-468f-8e82-51b4e9df88dc\") " pod="kube-system/storage-provisioner"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.844319    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a14996d-e910-4a4f-a6f6-f2d8565a4b9c-config-volume\") pod \"coredns-5dd5756b68-q4lbv\" (UID: \"9a14996d-e910-4a4f-a6f6-f2d8565a4b9c\") " pod="kube-system/coredns-5dd5756b68-q4lbv"
	Nov 23 08:42:13 old-k8s-version-180638 kubelet[1561]: I1123 08:42:13.844356    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fa923b06-d896-468f-8e82-51b4e9df88dc-tmp\") pod \"storage-provisioner\" (UID: \"fa923b06-d896-468f-8e82-51b4e9df88dc\") " pod="kube-system/storage-provisioner"
	Nov 23 08:42:14 old-k8s-version-180638 kubelet[1561]: I1123 08:42:14.902124    1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q4lbv" podStartSLOduration=14.902081859 podCreationTimestamp="2025-11-23 08:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:42:14.90156744 +0000 UTC m=+28.326358958" watchObservedRunningTime="2025-11-23 08:42:14.902081859 +0000 UTC m=+28.326873377"
	Nov 23 08:42:17 old-k8s-version-180638 kubelet[1561]: I1123 08:42:17.115903    1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.115773659 podCreationTimestamp="2025-11-23 08:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:42:14.969166121 +0000 UTC m=+28.393957647" watchObservedRunningTime="2025-11-23 08:42:17.115773659 +0000 UTC m=+30.540565177"
	Nov 23 08:42:17 old-k8s-version-180638 kubelet[1561]: I1123 08:42:17.116261    1561 topology_manager.go:215] "Topology Admit Handler" podUID="54457203-a4b0-4bfe-b7e6-9804ec70353f" podNamespace="default" podName="busybox"
	Nov 23 08:42:17 old-k8s-version-180638 kubelet[1561]: I1123 08:42:17.162701    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rd6c\" (UniqueName: \"kubernetes.io/projected/54457203-a4b0-4bfe-b7e6-9804ec70353f-kube-api-access-5rd6c\") pod \"busybox\" (UID: \"54457203-a4b0-4bfe-b7e6-9804ec70353f\") " pod="default/busybox"
	
	
	==> storage-provisioner [d28eb2e2ce19649f4947ef6afbf30d211d2dfa34551b90f0c10c58fdb65b63cd] <==
	I1123 08:42:14.310989       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:42:14.328878       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:42:14.329183       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:42:14.342255       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:42:14.342845       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46657fa0-d0c5-44e7-b4c5-6303b10aff5f", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-180638_72561ecd-5ccf-4007-bbe6-862fc9539cb1 became leader
	I1123 08:42:14.342920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-180638_72561ecd-5ccf-4007-bbe6-862fc9539cb1!
	I1123 08:42:14.443997       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-180638_72561ecd-5ccf-4007-bbe6-862fc9539cb1!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-180638 -n old-k8s-version-180638
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-180638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (14.04s)

x
+
TestStartStop/group/no-preload/serial/DeployApp (12.85s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-596617 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9b93317d-72f3-440c-9896-cb6d0b98f255] Pending
helpers_test.go:352: "busybox" [9b93317d-72f3-440c-9896-cb6d0b98f255] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9b93317d-72f3-440c-9896-cb6d0b98f255] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004142214s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-596617 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
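The failing assertion above is the container file-descriptor limit check: the test execs `ulimit -n` inside the busybox pod and expects the soft nofile limit to be 1048576, but the pod reports 1024. Below is a minimal standalone Go sketch of that check; the profile name (no-preload-596617), pod name (busybox) and expected value are taken from the log above, while the program itself is illustrative and is not the actual start_stop_delete_test.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Expected soft nofile limit inside the pod, per the test expectation above.
	const expected = "1048576"

	// Same command the test runs: exec `ulimit -n` in the busybox pod of the
	// no-preload-596617 profile (profile/pod names taken from the log above).
	out, err := exec.Command("kubectl", "--context", "no-preload-596617",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl exec failed:", err, string(out))
		return
	}

	got := strings.TrimSpace(string(out))
	if got != expected {
		// This branch corresponds to the recorded failure: got "1024".
		fmt.Printf("'ulimit -n' returned %s, expected %s\n", got, expected)
		return
	}
	fmt.Println("ulimit check passed:", got)
}

Run against a reachable cluster context with the busybox pod deployed, it prints the same mismatch line the test records.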
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-596617
helpers_test.go:243: (dbg) docker inspect no-preload-596617:

-- stdout --
	[
	    {
	        "Id": "a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535",
	        "Created": "2025-11-23T08:43:50.62986252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 205837,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:43:50.732730436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535/hosts",
	        "LogPath": "/var/lib/docker/containers/a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535/a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535-json.log",
	        "Name": "/no-preload-596617",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-596617:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-596617",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535",
	                "LowerDir": "/var/lib/docker/overlay2/3cf064fffc5f7850a69c9f83c5fdbcf4caf517683876e249afa8ec526609f9fa-init/diff:/var/lib/docker/overlay2/88c30082a717909d357f7d81c88a05ce3487a40d372ee6dc57fb9f012e0502da/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3cf064fffc5f7850a69c9f83c5fdbcf4caf517683876e249afa8ec526609f9fa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3cf064fffc5f7850a69c9f83c5fdbcf4caf517683876e249afa8ec526609f9fa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3cf064fffc5f7850a69c9f83c5fdbcf4caf517683876e249afa8ec526609f9fa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-596617",
	                "Source": "/var/lib/docker/volumes/no-preload-596617/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-596617",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-596617",
	                "name.minikube.sigs.k8s.io": "no-preload-596617",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "00b92314d34cf5857c066ae60db5365912921d8c4d66561bf3f3463cb270b201",
	            "SandboxKey": "/var/run/docker/netns/00b92314d34c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-596617": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:19:98:c6:9b:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "51e4af3ac2a76ea2ea64d1c486af05de7ac03b53a1cfb84aeab01a138e31c84c",
	                    "EndpointID": "b3ad4551d70f7ca491a8bcfac13f2b9967037a34fb51e45b38d9dd8afc1ceaf8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-596617",
	                        "a4a24325fbe7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-596617 -n no-preload-596617
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-596617 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-596617 logs -n 25: (1.263253063s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p cilium-440243 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo crio config                                                                                                                                                                                                                   │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ delete  │ -p cilium-440243                                                                                                                                                                                                                                    │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │ 23 Nov 25 08:39 UTC │
	│ start   │ -p cert-expiration-119748 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │ 23 Nov 25 08:40 UTC │
	│ ssh     │ force-systemd-env-760522 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ delete  │ -p force-systemd-env-760522                                                                                                                                                                                                                         │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ start   │ -p cert-options-106536 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ cert-options-106536 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ -p cert-options-106536 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p cert-options-106536                                                                                                                                                                                                                              │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-180638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ stop    │ -p old-k8s-version-180638 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-180638 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p cert-expiration-119748 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p cert-expiration-119748                                                                                                                                                                                                                           │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p no-preload-596617 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-180638 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ pause   │ -p old-k8s-version-180638 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ unpause │ -p old-k8s-version-180638 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p old-k8s-version-180638                                                                                                                                                                                                                           │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p old-k8s-version-180638                                                                                                                                                                                                                           │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p embed-certs-230843 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-230843       │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:43:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:43:58.820393  208070 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:43:58.820553  208070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:58.820559  208070 out.go:374] Setting ErrFile to fd 2...
	I1123 08:43:58.820564  208070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:58.820832  208070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:43:58.821280  208070 out.go:368] Setting JSON to false
	I1123 08:43:58.822319  208070 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5188,"bootTime":1763882251,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:43:58.822477  208070 start.go:143] virtualization:  
	I1123 08:43:58.828335  208070 out.go:179] * [embed-certs-230843] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:43:58.831686  208070 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:43:58.831751  208070 notify.go:221] Checking for updates...
	I1123 08:43:58.838329  208070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:43:58.841475  208070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:43:58.844958  208070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 08:43:58.848024  208070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:43:58.850992  208070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:43:58.854518  208070 config.go:182] Loaded profile config "no-preload-596617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:43:58.854632  208070 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:43:58.895036  208070 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:43:58.895154  208070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:43:59.011215  208070 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 08:43:59.000339025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:43:59.011336  208070 docker.go:319] overlay module found
	I1123 08:43:59.014535  208070 out.go:179] * Using the docker driver based on user configuration
	I1123 08:43:59.017573  208070 start.go:309] selected driver: docker
	I1123 08:43:59.017601  208070 start.go:927] validating driver "docker" against <nil>
	I1123 08:43:59.017627  208070 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:43:59.018307  208070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:43:59.147104  208070 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 08:43:59.137186105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:43:59.147255  208070 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:43:59.147482  208070 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:43:59.150526  208070 out.go:179] * Using Docker driver with root privileges
	I1123 08:43:59.153370  208070 cni.go:84] Creating CNI manager for ""
	I1123 08:43:59.153454  208070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:43:59.153468  208070 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:43:59.153541  208070 start.go:353] cluster config:
	{Name:embed-certs-230843 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-230843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:43:59.156589  208070 out.go:179] * Starting "embed-certs-230843" primary control-plane node in "embed-certs-230843" cluster
	I1123 08:43:59.159437  208070 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:43:59.162459  208070 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:43:59.165295  208070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:43:59.165347  208070 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 08:43:59.165357  208070 cache.go:65] Caching tarball of preloaded images
	I1123 08:43:59.165392  208070 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:43:59.165531  208070 preload.go:238] Found /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:43:59.165541  208070 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:43:59.165651  208070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/config.json ...
	I1123 08:43:59.165677  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/config.json: {Name:mk4d6baf73ed74f8398c7a685c69000ceb39bedf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:59.194302  208070 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:43:59.194327  208070 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:43:59.194343  208070 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:43:59.194371  208070 start.go:360] acquireMachinesLock for embed-certs-230843: {Name:mk7c64cffb325c304ae7da75fe620432eaf24373 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:59.194477  208070 start.go:364] duration metric: took 86.975µs to acquireMachinesLock for "embed-certs-230843"
	I1123 08:43:59.194508  208070 start.go:93] Provisioning new machine with config: &{Name:embed-certs-230843 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-230843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:43:59.194586  208070 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:43:58.436437  205527 cli_runner.go:164] Run: docker network inspect no-preload-596617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:58.456601  205527 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:43:58.461020  205527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:43:58.474078  205527 kubeadm.go:884] updating cluster {Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:43:58.474186  205527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:43:58.474240  205527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:43:58.502532  205527 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1123 08:43:58.502558  205527 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1123 08:43:58.502636  205527 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:58.502869  205527 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:58.503526  205527 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:58.503878  205527 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:58.504625  205527 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:58.505640  205527 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:58.505898  205527 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:58.505908  205527 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 08:43:58.507486  205527 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:58.509051  205527 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:58.510092  205527 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:58.510407  205527 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:58.510531  205527 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:58.510711  205527 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:58.510754  205527 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:58.510092  205527 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 08:43:58.731935  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0"
	I1123 08:43:58.732012  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:58.748605  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1123 08:43:58.748755  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1123 08:43:58.750486  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e"
	I1123 08:43:58.750548  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:58.757704  205527 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1123 08:43:58.757742  205527 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:58.757787  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.766799  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc"
	I1123 08:43:58.766862  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:58.770747  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196"
	I1123 08:43:58.770812  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:58.771443  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9"
	I1123 08:43:58.771484  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:58.818696  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a"
	I1123 08:43:58.818766  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:58.824233  205527 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1123 08:43:58.824271  205527 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1123 08:43:58.824318  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.824397  205527 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1123 08:43:58.824412  205527 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:58.824435  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.824508  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:58.869154  205527 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1123 08:43:58.869194  205527 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:58.869243  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.869295  205527 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1123 08:43:58.869307  205527 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:58.869327  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.880104  205527 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1123 08:43:58.880157  205527 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:58.880213  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.923474  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:58.923527  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:58.923639  205527 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1123 08:43:58.923668  205527 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:58.923698  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.934601  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:58.934660  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:58.934703  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:58.934738  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:58.934838  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:59.100632  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:59.100844  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:59.100992  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:59.127697  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:59.127840  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:59.128054  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:59.128147  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:59.300226  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:43:59.300325  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:43:59.300404  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:59.300460  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:59.329327  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:59.329499  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:59.329568  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:59.329620  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:59.396955  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:43:59.397058  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:43:59.397120  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1123 08:43:59.397138  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1123 08:43:59.397178  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1123 08:43:59.397224  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:59.197893  208070 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:43:59.198118  208070 start.go:159] libmachine.API.Create for "embed-certs-230843" (driver="docker")
	I1123 08:43:59.198157  208070 client.go:173] LocalClient.Create starting
	I1123 08:43:59.198277  208070 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem
	I1123 08:43:59.198319  208070 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:59.198342  208070 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:59.198395  208070 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem
	I1123 08:43:59.198417  208070 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:59.198433  208070 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:59.198803  208070 cli_runner.go:164] Run: docker network inspect embed-certs-230843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:43:59.215653  208070 cli_runner.go:211] docker network inspect embed-certs-230843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:43:59.215745  208070 network_create.go:284] running [docker network inspect embed-certs-230843] to gather additional debugging logs...
	I1123 08:43:59.215762  208070 cli_runner.go:164] Run: docker network inspect embed-certs-230843
	W1123 08:43:59.234026  208070 cli_runner.go:211] docker network inspect embed-certs-230843 returned with exit code 1
	I1123 08:43:59.234055  208070 network_create.go:287] error running [docker network inspect embed-certs-230843]: docker network inspect embed-certs-230843: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-230843 not found
	I1123 08:43:59.234070  208070 network_create.go:289] output of [docker network inspect embed-certs-230843]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-230843 not found
	
	** /stderr **
	I1123 08:43:59.234159  208070 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:59.254328  208070 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a946cc9c0edf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:ea:52:17:a9:7a} reservation:<nil>}
	I1123 08:43:59.254644  208070 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb33daef15c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:08:1d:d1:c6:df} reservation:<nil>}
	I1123 08:43:59.254975  208070 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb61edac6088 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:64:59:e2:c3:5a} reservation:<nil>}
	I1123 08:43:59.255396  208070 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ccbb0}
	I1123 08:43:59.255414  208070 network_create.go:124] attempt to create docker network embed-certs-230843 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 08:43:59.255470  208070 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-230843 embed-certs-230843
	I1123 08:43:59.334308  208070 network_create.go:108] docker network embed-certs-230843 192.168.76.0/24 created
	I1123 08:43:59.334339  208070 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-230843" container
	I1123 08:43:59.334426  208070 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:43:59.359885  208070 cli_runner.go:164] Run: docker volume create embed-certs-230843 --label name.minikube.sigs.k8s.io=embed-certs-230843 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:43:59.382840  208070 oci.go:103] Successfully created a docker volume embed-certs-230843
	I1123 08:43:59.382935  208070 cli_runner.go:164] Run: docker run --rm --name embed-certs-230843-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-230843 --entrypoint /usr/bin/test -v embed-certs-230843:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:44:00.279925  208070 oci.go:107] Successfully prepared a docker volume embed-certs-230843
	I1123 08:44:00.280008  208070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:44:00.280024  208070 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:44:00.280111  208070 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-230843:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:43:59.517536  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:43:59.517637  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:43:59.517695  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:43:59.517743  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:43:59.517788  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:43:59.517833  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:43:59.517878  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:43:59.517921  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:43:59.517967  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1123 08:43:59.517983  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1123 08:43:59.518020  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1123 08:43:59.518033  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1123 08:43:59.583417  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1123 08:43:59.583452  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1123 08:43:59.583493  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1123 08:43:59.583504  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1123 08:43:59.583532  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1123 08:43:59.583551  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1123 08:43:59.583579  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1123 08:43:59.583592  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1123 08:43:59.700483  205527 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:59.701931  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1123 08:43:59.968550  205527 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1123 08:43:59.968686  205527 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1123 08:43:59.968748  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:00.112305  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1123 08:44:00.181960  205527 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1123 08:44:00.182069  205527 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:00.182314  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:44:00.273861  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:00.440733  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:00.471719  205527 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:44:00.471798  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:44:00.584201  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:02.492982  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.021158278s)
	I1123 08:44:02.493052  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1123 08:44:02.493087  205527 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:44:02.493164  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:44:02.493253  205527 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.909026288s)
	I1123 08:44:02.493314  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1123 08:44:02.493464  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:44:04.398900  205527 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.90539484s)
	I1123 08:44:04.398935  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1123 08:44:04.398961  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1123 08:44:04.399012  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.905817918s)
	I1123 08:44:04.399026  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1123 08:44:04.399052  205527 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:44:04.399096  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:44:06.167290  208070 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-230843:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.887129254s)
	I1123 08:44:06.167318  208070 kic.go:203] duration metric: took 5.887290897s to extract preloaded images to volume ...
	W1123 08:44:06.167453  208070 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:44:06.167554  208070 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:44:06.252841  208070 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-230843 --name embed-certs-230843 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-230843 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-230843 --network embed-certs-230843 --ip 192.168.76.2 --volume embed-certs-230843:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:44:06.697562  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Running}}
	I1123 08:44:06.726066  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:06.754105  208070 cli_runner.go:164] Run: docker exec embed-certs-230843 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:44:06.820360  208070 oci.go:144] the created container "embed-certs-230843" has a running status.
	I1123 08:44:06.820392  208070 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa...
	I1123 08:44:07.367947  208070 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:44:07.405852  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:07.439306  208070 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:44:07.439326  208070 kic_runner.go:114] Args: [docker exec --privileged embed-certs-230843 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:44:07.546025  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:07.572710  208070 machine.go:94] provisionDockerMachine start ...
	I1123 08:44:07.572805  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:07.602226  208070 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:07.602552  208070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1123 08:44:07.602562  208070 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:44:07.603380  208070 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51518->127.0.0.1:33068: read: connection reset by peer
	I1123 08:44:07.080178  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (2.681055363s)
	I1123 08:44:07.080202  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1123 08:44:07.080220  205527 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:44:07.080264  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:44:08.315422  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.235136614s)
	I1123 08:44:08.315452  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1123 08:44:08.315470  205527 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:44:08.315515  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:44:10.769814  208070 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-230843
	
	I1123 08:44:10.769904  208070 ubuntu.go:182] provisioning hostname "embed-certs-230843"
	I1123 08:44:10.769999  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:10.791161  208070 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:10.791487  208070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1123 08:44:10.791503  208070 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-230843 && echo "embed-certs-230843" | sudo tee /etc/hostname
	I1123 08:44:10.963401  208070 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-230843
	
	I1123 08:44:10.963550  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:10.985988  208070 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:10.986321  208070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1123 08:44:10.986337  208070 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-230843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-230843/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-230843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:44:11.154222  208070 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:44:11.154254  208070 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-2339/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-2339/.minikube}
	I1123 08:44:11.154285  208070 ubuntu.go:190] setting up certificates
	I1123 08:44:11.154294  208070 provision.go:84] configureAuth start
	I1123 08:44:11.154355  208070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-230843
	I1123 08:44:11.180464  208070 provision.go:143] copyHostCerts
	I1123 08:44:11.180527  208070 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem, removing ...
	I1123 08:44:11.180536  208070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem
	I1123 08:44:11.180608  208070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem (1078 bytes)
	I1123 08:44:11.180708  208070 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem, removing ...
	I1123 08:44:11.180714  208070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem
	I1123 08:44:11.180779  208070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem (1123 bytes)
	I1123 08:44:11.180880  208070 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem, removing ...
	I1123 08:44:11.180890  208070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem
	I1123 08:44:11.180927  208070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem (1675 bytes)
	I1123 08:44:11.180985  208070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem org=jenkins.embed-certs-230843 san=[127.0.0.1 192.168.76.2 embed-certs-230843 localhost minikube]
	I1123 08:44:11.380799  208070 provision.go:177] copyRemoteCerts
	I1123 08:44:11.380857  208070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:44:11.380909  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:11.397936  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:11.513153  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:44:11.535766  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:44:11.555866  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:44:11.578173  208070 provision.go:87] duration metric: took 423.857195ms to configureAuth
	I1123 08:44:11.578201  208070 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:44:11.578383  208070 config.go:182] Loaded profile config "embed-certs-230843": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:44:11.578397  208070 machine.go:97] duration metric: took 4.005668909s to provisionDockerMachine
	I1123 08:44:11.578406  208070 client.go:176] duration metric: took 12.380236941s to LocalClient.Create
	I1123 08:44:11.578420  208070 start.go:167] duration metric: took 12.3803031s to libmachine.API.Create "embed-certs-230843"
	I1123 08:44:11.578432  208070 start.go:293] postStartSetup for "embed-certs-230843" (driver="docker")
	I1123 08:44:11.578441  208070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:44:11.578492  208070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:44:11.578532  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:11.598372  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:11.710512  208070 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:44:11.714324  208070 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:44:11.714355  208070 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:44:11.714366  208070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/addons for local assets ...
	I1123 08:44:11.714420  208070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/files for local assets ...
	I1123 08:44:11.714501  208070 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem -> 41512.pem in /etc/ssl/certs
	I1123 08:44:11.714610  208070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:44:11.723788  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:44:11.745314  208070 start.go:296] duration metric: took 166.868593ms for postStartSetup
	I1123 08:44:11.745692  208070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-230843
	I1123 08:44:11.767363  208070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/config.json ...
	I1123 08:44:11.767699  208070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:44:11.767784  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:11.791138  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:11.902467  208070 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:44:11.908289  208070 start.go:128] duration metric: took 12.713686598s to createHost
	I1123 08:44:11.908320  208070 start.go:83] releasing machines lock for "embed-certs-230843", held for 12.713824618s
	I1123 08:44:11.908397  208070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-230843
	I1123 08:44:11.927081  208070 ssh_runner.go:195] Run: cat /version.json
	I1123 08:44:11.927166  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:11.927332  208070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:44:11.927444  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:11.962785  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:11.969210  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:12.077248  208070 ssh_runner.go:195] Run: systemctl --version
	I1123 08:44:12.174947  208070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:44:12.181170  208070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:44:12.181252  208070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:44:12.214289  208070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:44:12.214352  208070 start.go:496] detecting cgroup driver to use...
	I1123 08:44:12.214401  208070 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:44:12.214479  208070 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:44:12.230990  208070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:44:12.246483  208070 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:44:12.246552  208070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:44:12.265768  208070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:44:12.286028  208070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:44:12.437707  208070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:44:12.599250  208070 docker.go:234] disabling docker service ...
	I1123 08:44:12.599316  208070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:44:12.626144  208070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:44:12.640088  208070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:44:12.794516  208070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:44:12.945619  208070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:44:12.960745  208070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:44:12.980037  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:44:12.990289  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:44:13.000153  208070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:44:13.000290  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:44:13.011343  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:44:13.021540  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:44:13.031323  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:44:13.041138  208070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:44:13.050121  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:44:13.059766  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:44:13.069313  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:44:13.079315  208070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:44:13.088091  208070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:44:13.096340  208070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:13.254505  208070 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:44:13.432840  208070 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:44:13.432964  208070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:44:13.437268  208070 start.go:564] Will wait 60s for crictl version
	I1123 08:44:13.437380  208070 ssh_runner.go:195] Run: which crictl
	I1123 08:44:13.447283  208070 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:44:13.496735  208070 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:44:13.496850  208070 ssh_runner.go:195] Run: containerd --version
	I1123 08:44:13.518598  208070 ssh_runner.go:195] Run: containerd --version
	I1123 08:44:13.547197  208070 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
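For reference, the runtime checks above reduce to two commands on the node; a minimal sketch, assuming the crictl endpoint written to /etc/crictl.yaml earlier in this log:

    # query the CRI runtime through crictl (endpoint comes from /etc/crictl.yaml)
    sudo crictl version        # expect RuntimeName containerd, RuntimeVersion v2.1.5
    # ask the containerd binary directly
    containerd --version
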
	I1123 08:44:13.550219  208070 cli_runner.go:164] Run: docker network inspect embed-certs-230843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:44:13.569816  208070 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:44:13.573664  208070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:44:13.583749  208070 kubeadm.go:884] updating cluster {Name:embed-certs-230843 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-230843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:44:13.583869  208070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:44:13.583940  208070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:44:13.617559  208070 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:44:13.617582  208070 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:44:13.617646  208070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:44:13.655825  208070 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:44:13.655846  208070 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:44:13.655853  208070 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 08:44:13.655954  208070 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-230843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-230843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:44:13.656015  208070 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:44:13.687279  208070 cni.go:84] Creating CNI manager for ""
	I1123 08:44:13.687302  208070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:44:13.687349  208070 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:44:13.687371  208070 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-230843 NodeName:embed-certs-230843 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:44:13.687487  208070 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-230843"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
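	
	The kubeadm config rendered above is written out as /var/tmp/minikube/kubeadm.yaml and passed to kubeadm init further down (see the Start: line below). A minimal sketch of the equivalent manual invocation, assuming the binary and config paths from this log; the real run ignores a longer preflight list:

    # run kubeadm against the generated config, tolerating the docker-driver system checks
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification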
	
	I1123 08:44:13.687556  208070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:44:13.695772  208070 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:44:13.695844  208070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:44:13.703947  208070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1123 08:44:13.716967  208070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:44:13.729959  208070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1123 08:44:13.742910  208070 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:44:13.746652  208070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:44:13.756101  208070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
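The kubelet unit and drop-in copied above land in /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A quick way to confirm systemd picked them up after the daemon-reload (a sketch, not part of the test flow):

    # print the unit plus its drop-ins; the ExecStart override with --node-ip should appear
    systemctl cat kubelet
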
	I1123 08:44:09.621354  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.305801584s)
	I1123 08:44:09.621382  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1123 08:44:09.621401  205527 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:44:09.621472  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:44:13.370517  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.749015759s)
	I1123 08:44:13.370541  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 08:44:13.370558  205527 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:44:13.370611  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:44:13.906030  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 08:44:13.906062  205527 cache_images.go:125] Successfully loaded all cached images
	I1123 08:44:13.906067  205527 cache_images.go:94] duration metric: took 15.403497458s to LoadCachedImages
	I1123 08:44:13.906078  205527 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1123 08:44:13.906170  205527 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-596617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:44:13.906242  205527 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:44:13.946984  205527 cni.go:84] Creating CNI manager for ""
	I1123 08:44:13.947007  205527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:44:13.947020  205527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:44:13.947052  205527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-596617 NodeName:no-preload-596617 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:44:13.947161  205527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-596617"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
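	
	Before handing a file like this to kubeadm init, the config can be sanity-checked on its own; a hedged sketch, assuming the kubeadm v1.34.1 binary staged under /var/lib/minikube/binaries and that the `config validate` subcommand is available in this release:

    # validate the rendered kubeadm.yaml without touching the cluster
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml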
	
	I1123 08:44:13.947226  205527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:44:13.967989  205527 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 08:44:13.968052  205527 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 08:44:13.994704  205527 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1123 08:44:13.994806  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 08:44:13.995443  205527 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1123 08:44:13.996813  205527 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1123 08:44:14.000863  205527 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 08:44:14.000908  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
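Each download URL above is paired with a .sha256 checksum file, which minikube uses instead of caching the binary. A minimal sketch of the same fetch-and-verify step done by hand, assuming curl and sha256sum are available on the host:

    # fetch kubectl for linux/arm64 plus its published checksum, then verify
    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl
    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
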
	I1123 08:44:13.910292  208070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:13.934693  208070 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843 for IP: 192.168.76.2
	I1123 08:44:13.934731  208070 certs.go:195] generating shared ca certs ...
	I1123 08:44:13.934748  208070 certs.go:227] acquiring lock for ca certs: {Name:mke0fc62f41acbef5eb3e84af3a3b8f9858bd1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:13.934926  208070 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key
	I1123 08:44:13.934990  208070 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key
	I1123 08:44:13.935003  208070 certs.go:257] generating profile certs ...
	I1123 08:44:13.935076  208070 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.key
	I1123 08:44:13.935097  208070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.crt with IP's: []
	I1123 08:44:14.136806  208070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.crt ...
	I1123 08:44:14.136886  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.crt: {Name:mk7188df987ff6201384ec199772dd4ba2c8d80e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.137128  208070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.key ...
	I1123 08:44:14.137160  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.key: {Name:mk256112ada63c93b50ab366f3ed122fe54cce84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.138908  208070 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key.1bb9a82d
	I1123 08:44:14.138983  208070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt.1bb9a82d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 08:44:14.352305  208070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt.1bb9a82d ...
	I1123 08:44:14.352386  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt.1bb9a82d: {Name:mke4d3bc434cd23a88cb8e2b92d52db45b473ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.353262  208070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key.1bb9a82d ...
	I1123 08:44:14.353319  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key.1bb9a82d: {Name:mka627bf1e131bf980036887cf099a66b966a4a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.353529  208070 certs.go:382] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt.1bb9a82d -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt
	I1123 08:44:14.353727  208070 certs.go:386] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key.1bb9a82d -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key
	I1123 08:44:14.353838  208070 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.key
	I1123 08:44:14.353863  208070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.crt with IP's: []
	I1123 08:44:14.949563  208070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.crt ...
	I1123 08:44:14.949638  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.crt: {Name:mkf579b75d4f60dc245cbcfdbb33a19b5632d08e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.950542  208070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.key ...
	I1123 08:44:14.950629  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.key: {Name:mk8b6c7d7f8460c1be9624ae5aac0c3d889446c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.950874  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem (1338 bytes)
	W1123 08:44:14.950947  208070 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151_empty.pem, impossibly tiny 0 bytes
	I1123 08:44:14.950972  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 08:44:14.951059  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:44:14.951122  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:44:14.951172  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem (1675 bytes)
	I1123 08:44:14.951262  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:44:14.951932  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:44:14.972816  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:44:14.993691  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:44:15.018483  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:44:15.043505  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:44:15.067215  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:44:15.093679  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:44:15.125792  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:44:15.166087  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem --> /usr/share/ca-certificates/4151.pem (1338 bytes)
	I1123 08:44:15.211096  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /usr/share/ca-certificates/41512.pem (1708 bytes)
	I1123 08:44:15.257002  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:44:15.300930  208070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
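The apiserver certificate generated above should carry the SANs listed at its "Generating cert" line (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). A sketch for checking that on the node, assuming the cert was copied to /var/lib/minikube/certs/apiserver.crt as shown:

    # dump the cert and show its Subject Alternative Name block
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
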
	I1123 08:44:15.322110  208070 ssh_runner.go:195] Run: openssl version
	I1123 08:44:15.332064  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4151.pem && ln -fs /usr/share/ca-certificates/4151.pem /etc/ssl/certs/4151.pem"
	I1123 08:44:15.342057  208070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4151.pem
	I1123 08:44:15.348824  208070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:02 /usr/share/ca-certificates/4151.pem
	I1123 08:44:15.348888  208070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4151.pem
	I1123 08:44:15.463707  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4151.pem /etc/ssl/certs/51391683.0"
	I1123 08:44:15.508947  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41512.pem && ln -fs /usr/share/ca-certificates/41512.pem /etc/ssl/certs/41512.pem"
	I1123 08:44:15.532500  208070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41512.pem
	I1123 08:44:15.551114  208070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:02 /usr/share/ca-certificates/41512.pem
	I1123 08:44:15.551184  208070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41512.pem
	I1123 08:44:15.676529  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41512.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:44:15.693134  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:44:15.709245  208070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:15.716248  208070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:15.716309  208070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:15.809343  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
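The short symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject-hash of each certificate, which is exactly what the `openssl x509 -hash -noout` runs compute. A sketch of deriving one by hand for the minikube CA:

    # compute the subject hash and create the hash-named symlink OpenSSL expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
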
	I1123 08:44:15.821220  208070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:44:15.826868  208070 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:44:15.826919  208070 kubeadm.go:401] StartCluster: {Name:embed-certs-230843 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-230843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:15.826989  208070 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:44:15.827057  208070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:44:15.865090  208070 cri.go:89] found id: ""
	I1123 08:44:15.865202  208070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:44:15.878642  208070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:44:15.888405  208070 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:44:15.888466  208070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:44:15.900396  208070 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:44:15.900464  208070 kubeadm.go:158] found existing configuration files:
	
	I1123 08:44:15.900542  208070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:44:15.910826  208070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:44:15.910891  208070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:44:15.920058  208070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:44:15.929123  208070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:44:15.929185  208070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:44:15.938477  208070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:44:15.947791  208070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:44:15.947908  208070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:44:15.955219  208070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:44:15.968045  208070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:44:15.968186  208070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:44:15.983401  208070 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:44:16.045836  208070 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:44:16.045982  208070 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:44:16.083873  208070 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:44:16.083984  208070 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:44:16.084048  208070 kubeadm.go:319] OS: Linux
	I1123 08:44:16.084115  208070 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:44:16.084193  208070 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:44:16.084307  208070 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:44:16.084395  208070 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:44:16.084474  208070 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:44:16.084581  208070 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:44:16.084658  208070 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:44:16.084743  208070 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:44:16.084822  208070 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:44:16.217836  208070 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:44:16.218007  208070 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:44:16.218129  208070 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:44:16.225763  208070 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
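The system-verification output above is the preflight stage that minikube deliberately tolerates on the docker driver (see the ignore-preflight-errors list in the kubeadm init command). The same checks can be replayed in isolation; a sketch, assuming the binary and config paths from this log:

    # re-run only kubeadm's preflight phase against the generated config
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
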
	I1123 08:44:15.145083  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 08:44:15.154050  205527 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 08:44:15.154143  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1123 08:44:15.227311  205527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:15.270427  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 08:44:15.286455  205527 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 08:44:15.286502  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1123 08:44:15.857789  205527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:44:15.870363  205527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 08:44:15.885356  205527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:44:15.900816  205527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1123 08:44:15.915802  205527 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:44:15.920804  205527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:44:15.932468  205527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:16.082663  205527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:16.106056  205527 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617 for IP: 192.168.85.2
	I1123 08:44:16.106077  205527 certs.go:195] generating shared ca certs ...
	I1123 08:44:16.106094  205527 certs.go:227] acquiring lock for ca certs: {Name:mke0fc62f41acbef5eb3e84af3a3b8f9858bd1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.106239  205527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key
	I1123 08:44:16.106285  205527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key
	I1123 08:44:16.106300  205527 certs.go:257] generating profile certs ...
	I1123 08:44:16.106391  205527 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.key
	I1123 08:44:16.106409  205527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt with IP's: []
	I1123 08:44:16.453703  205527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt ...
	I1123 08:44:16.453736  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt: {Name:mk8a5c1b998580c1ce82ec5015c51174aefa7b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.454596  205527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.key ...
	I1123 08:44:16.454615  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.key: {Name:mkdb069435a55c094282563843318c6e40257347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.454733  205527 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key.5887770e
	I1123 08:44:16.454753  205527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt.5887770e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:44:16.667725  205527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt.5887770e ...
	I1123 08:44:16.667757  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt.5887770e: {Name:mk402568c6ad009d91b37158736ab0794a8a3e8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.668587  205527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key.5887770e ...
	I1123 08:44:16.668606  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key.5887770e: {Name:mk77785d4cce0ec787eff9ba26527cdbbd934787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.668709  205527 certs.go:382] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt.5887770e -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt
	I1123 08:44:16.668792  205527 certs.go:386] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key.5887770e -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key
	I1123 08:44:16.668856  205527 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key
	I1123 08:44:16.668879  205527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.crt with IP's: []
	I1123 08:44:16.804060  205527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.crt ...
	I1123 08:44:16.804094  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.crt: {Name:mka01e33d777b7c726a0d4f8a624a970b79b1d75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.804298  205527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key ...
	I1123 08:44:16.804313  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key: {Name:mk0f45357a9a0407ad0917e71f5321738dd0f7d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.804515  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem (1338 bytes)
	W1123 08:44:16.804565  205527 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151_empty.pem, impossibly tiny 0 bytes
	I1123 08:44:16.804579  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 08:44:16.804607  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:44:16.804636  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:44:16.804665  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem (1675 bytes)
	I1123 08:44:16.804716  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:44:16.805271  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:44:16.828372  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:44:16.860169  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:44:16.882374  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:44:16.901162  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:44:16.925754  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:44:16.951923  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:44:16.972100  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:44:16.992716  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem --> /usr/share/ca-certificates/4151.pem (1338 bytes)
	I1123 08:44:17.014356  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /usr/share/ca-certificates/41512.pem (1708 bytes)
	I1123 08:44:17.034554  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:44:17.054514  205527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:44:17.069524  205527 ssh_runner.go:195] Run: openssl version
	I1123 08:44:17.076272  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:44:17.085550  205527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:17.089846  205527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:17.089915  205527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:17.139644  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:44:17.152256  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4151.pem && ln -fs /usr/share/ca-certificates/4151.pem /etc/ssl/certs/4151.pem"
	I1123 08:44:17.164353  205527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4151.pem
	I1123 08:44:17.169138  205527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:02 /usr/share/ca-certificates/4151.pem
	I1123 08:44:17.169211  205527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4151.pem
	I1123 08:44:17.211085  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4151.pem /etc/ssl/certs/51391683.0"
	I1123 08:44:17.231011  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41512.pem && ln -fs /usr/share/ca-certificates/41512.pem /etc/ssl/certs/41512.pem"
	I1123 08:44:17.240127  205527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41512.pem
	I1123 08:44:17.244326  205527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:02 /usr/share/ca-certificates/41512.pem
	I1123 08:44:17.244390  205527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41512.pem
	I1123 08:44:17.285962  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41512.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:44:17.295061  205527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:44:17.299351  205527 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:44:17.299404  205527 kubeadm.go:401] StartCluster: {Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:17.299478  205527 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:44:17.299541  205527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:44:17.327111  205527 cri.go:89] found id: ""
	I1123 08:44:17.327180  205527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:44:17.336985  205527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:44:17.345230  205527 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:44:17.345293  205527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:44:17.355914  205527 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:44:17.355945  205527 kubeadm.go:158] found existing configuration files:
	
	I1123 08:44:17.355995  205527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:44:17.364986  205527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:44:17.365055  205527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:44:17.373010  205527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:44:17.382693  205527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:44:17.382756  205527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:44:17.390667  205527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:44:17.399297  205527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:44:17.399358  205527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:44:17.407259  205527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:44:17.415994  205527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:44:17.416056  205527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:44:17.424188  205527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:44:17.470697  205527 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:44:17.471011  205527 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:44:17.497824  205527 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:44:17.497903  205527 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:44:17.497943  205527 kubeadm.go:319] OS: Linux
	I1123 08:44:17.497999  205527 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:44:17.498061  205527 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:44:17.498112  205527 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:44:17.498164  205527 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:44:17.498216  205527 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:44:17.498267  205527 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:44:17.498316  205527 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:44:17.498367  205527 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:44:17.498417  205527 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:44:17.598664  205527 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:44:17.598780  205527 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:44:17.598882  205527 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:44:17.621577  205527 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:44:16.232062  208070 out.go:252]   - Generating certificates and keys ...
	I1123 08:44:16.232252  208070 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:44:16.232375  208070 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:44:17.034972  208070 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:44:17.212443  208070 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:44:17.755959  208070 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:44:18.282388  208070 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:44:17.627775  205527 out.go:252]   - Generating certificates and keys ...
	I1123 08:44:17.627879  205527 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:44:17.627951  205527 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:44:18.901944  205527 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:44:19.242527  205527 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:44:19.382571  205527 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:44:19.745763  208070 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:44:19.754517  208070 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-230843 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:44:20.657812  208070 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:44:20.657949  208070 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-230843 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:44:20.966681  208070 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:44:21.513767  208070 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:44:21.853792  208070 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:44:21.853866  208070 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:44:22.469058  208070 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:44:22.850459  208070 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:44:19.791818  205527 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:44:21.281507  205527 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:44:21.282125  205527 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-596617] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:44:21.482697  205527 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:44:21.483325  205527 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-596617] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:44:21.700162  205527 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:44:23.765540  205527 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:44:24.388665  208070 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:44:24.728651  208070 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:44:25.535462  208070 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:44:25.536454  208070 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:44:25.541769  208070 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:44:24.885030  205527 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:44:24.885700  205527 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:44:24.981908  205527 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:44:25.116233  205527 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:44:26.261289  205527 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:44:27.569320  205527 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:44:28.481763  205527 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:44:28.481866  205527 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:44:28.481937  205527 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:44:25.545154  208070 out.go:252]   - Booting up control plane ...
	I1123 08:44:25.545264  208070 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:44:25.545349  208070 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:44:25.553578  208070 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:44:25.585938  208070 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:44:25.586047  208070 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:44:25.595406  208070 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:44:25.596179  208070 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:44:25.596755  208070 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:44:25.757731  208070 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:44:25.757851  208070 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:44:27.261764  208070 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501629862s
	I1123 08:44:27.262765  208070 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:44:27.262946  208070 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 08:44:27.263667  208070 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:44:27.263789  208070 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:44:28.485513  205527 out.go:252]   - Booting up control plane ...
	I1123 08:44:28.485630  205527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:44:28.485718  205527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:44:28.485792  205527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:44:28.518445  205527 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:44:28.518807  205527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:44:28.527558  205527 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:44:28.527817  205527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:44:28.528000  205527 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:44:28.757903  205527 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:44:28.758023  205527 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:44:30.249804  205527 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501494619s
	I1123 08:44:30.252283  205527 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:44:30.252497  205527 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 08:44:30.252591  205527 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:44:30.252996  205527 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:44:33.983655  208070 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.719559831s
	I1123 08:44:37.226412  205527 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.973174117s
	I1123 08:44:39.198708  205527 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.9452814s
	I1123 08:44:39.771010  208070 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 12.507763754s
	I1123 08:44:40.632942  208070 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 13.366512561s
	I1123 08:44:40.682010  208070 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:44:40.718781  208070 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:44:40.742339  208070 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:44:40.742572  208070 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-230843 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:44:40.776769  208070 kubeadm.go:319] [bootstrap-token] Using token: iuvxjy.jwvovxgrh4ynkuhf
	I1123 08:44:41.255275  205527 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.002632231s
	I1123 08:44:41.280712  205527 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:44:41.302399  205527 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:44:41.333475  205527 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:44:41.333687  205527 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-596617 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:44:41.361156  205527 kubeadm.go:319] [bootstrap-token] Using token: 97edkx.mk6s7acezl06y535
	I1123 08:44:40.779696  208070 out.go:252]   - Configuring RBAC rules ...
	I1123 08:44:40.779818  208070 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:44:40.785375  208070 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:44:40.796035  208070 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:44:40.804646  208070 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:44:40.812545  208070 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:44:40.817495  208070 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:44:41.038524  208070 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:44:41.474381  208070 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:44:42.042420  208070 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:44:42.042440  208070 kubeadm.go:319] 
	I1123 08:44:42.042517  208070 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:44:42.042521  208070 kubeadm.go:319] 
	I1123 08:44:42.042599  208070 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:44:42.042603  208070 kubeadm.go:319] 
	I1123 08:44:42.042628  208070 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:44:42.042696  208070 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:44:42.042748  208070 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:44:42.042752  208070 kubeadm.go:319] 
	I1123 08:44:42.042806  208070 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:44:42.042809  208070 kubeadm.go:319] 
	I1123 08:44:42.042857  208070 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:44:42.042861  208070 kubeadm.go:319] 
	I1123 08:44:42.042918  208070 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:44:42.042995  208070 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:44:42.043072  208070 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:44:42.043076  208070 kubeadm.go:319] 
	I1123 08:44:42.043163  208070 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:44:42.043240  208070 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:44:42.043246  208070 kubeadm.go:319] 
	I1123 08:44:42.043330  208070 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token iuvxjy.jwvovxgrh4ynkuhf \
	I1123 08:44:42.043433  208070 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 \
	I1123 08:44:42.043453  208070 kubeadm.go:319] 	--control-plane 
	I1123 08:44:42.043457  208070 kubeadm.go:319] 
	I1123 08:44:42.043542  208070 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:44:42.043546  208070 kubeadm.go:319] 
	I1123 08:44:42.043627  208070 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token iuvxjy.jwvovxgrh4ynkuhf \
	I1123 08:44:42.043730  208070 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 
	I1123 08:44:42.049147  208070 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 08:44:42.049553  208070 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:44:42.049687  208070 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:44:42.049700  208070 cni.go:84] Creating CNI manager for ""
	I1123 08:44:42.049708  208070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:44:42.052990  208070 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:44:41.364258  205527 out.go:252]   - Configuring RBAC rules ...
	I1123 08:44:41.364387  205527 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:44:41.377550  205527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:44:41.392237  205527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:44:41.398358  205527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:44:41.409745  205527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:44:41.414534  205527 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:44:41.665018  205527 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:44:42.149327  205527 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:44:42.666223  205527 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:44:42.667502  205527 kubeadm.go:319] 
	I1123 08:44:42.667573  205527 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:44:42.667578  205527 kubeadm.go:319] 
	I1123 08:44:42.667663  205527 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:44:42.667668  205527 kubeadm.go:319] 
	I1123 08:44:42.667693  205527 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:44:42.667752  205527 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:44:42.667802  205527 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:44:42.667806  205527 kubeadm.go:319] 
	I1123 08:44:42.667859  205527 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:44:42.667863  205527 kubeadm.go:319] 
	I1123 08:44:42.667910  205527 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:44:42.667914  205527 kubeadm.go:319] 
	I1123 08:44:42.667965  205527 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:44:42.668040  205527 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:44:42.668115  205527 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:44:42.668119  205527 kubeadm.go:319] 
	I1123 08:44:42.668203  205527 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:44:42.668279  205527 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:44:42.668283  205527 kubeadm.go:319] 
	I1123 08:44:42.668367  205527 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 97edkx.mk6s7acezl06y535 \
	I1123 08:44:42.668472  205527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 \
	I1123 08:44:42.668493  205527 kubeadm.go:319] 	--control-plane 
	I1123 08:44:42.668496  205527 kubeadm.go:319] 
	I1123 08:44:42.668582  205527 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:44:42.668586  205527 kubeadm.go:319] 
	I1123 08:44:42.671972  205527 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 97edkx.mk6s7acezl06y535 \
	I1123 08:44:42.672086  205527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 
	I1123 08:44:42.673580  205527 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 08:44:42.673804  205527 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:44:42.673908  205527 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:44:42.673925  205527 cni.go:84] Creating CNI manager for ""
	I1123 08:44:42.673932  205527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:44:42.677246  205527 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:44:42.055995  208070 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:44:42.065552  208070 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:44:42.065580  208070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:44:42.092663  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:44:42.624040  208070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:44:42.624174  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:42.624244  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-230843 minikube.k8s.io/updated_at=2025_11_23T08_44_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=embed-certs-230843 minikube.k8s.io/primary=true
	I1123 08:44:43.027510  208070 ops.go:34] apiserver oom_adj: -16
	I1123 08:44:43.027614  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:43.527981  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:42.680128  205527 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:44:42.685968  205527 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:44:42.685991  205527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:44:42.712176  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:44:43.084923  205527 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:44:43.085041  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:43.085103  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-596617 minikube.k8s.io/updated_at=2025_11_23T08_44_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=no-preload-596617 minikube.k8s.io/primary=true
	I1123 08:44:43.321444  205527 ops.go:34] apiserver oom_adj: -16
	I1123 08:44:43.321545  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:43.822205  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:44.321884  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:44.027776  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:44.528512  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:45.030313  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:45.527732  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:45.669462  208070 kubeadm.go:1114] duration metric: took 3.04533404s to wait for elevateKubeSystemPrivileges
	I1123 08:44:45.669502  208070 kubeadm.go:403] duration metric: took 29.842585167s to StartCluster
	I1123 08:44:45.669520  208070 settings.go:142] acquiring lock: {Name:mkfb77243b31dfe604b438e7da3f1bce2ba7b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:45.669588  208070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:44:45.670632  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:45.670836  208070 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:44:45.670982  208070 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:44:45.671192  208070 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:45.671259  208070 config.go:182] Loaded profile config "embed-certs-230843": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:44:45.671270  208070 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-230843"
	I1123 08:44:45.671287  208070 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-230843"
	I1123 08:44:45.671295  208070 addons.go:70] Setting default-storageclass=true in profile "embed-certs-230843"
	I1123 08:44:45.671307  208070 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-230843"
	I1123 08:44:45.671311  208070 host.go:66] Checking if "embed-certs-230843" exists ...
	I1123 08:44:45.671601  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:45.671756  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:45.705042  208070 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:45.722693  208070 addons.go:239] Setting addon default-storageclass=true in "embed-certs-230843"
	I1123 08:44:45.722743  208070 host.go:66] Checking if "embed-certs-230843" exists ...
	I1123 08:44:45.723200  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:45.732027  208070 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:44.822635  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:45.321717  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:45.822616  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:46.322263  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:46.822066  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:47.081276  205527 kubeadm.go:1114] duration metric: took 3.996277021s to wait for elevateKubeSystemPrivileges
	I1123 08:44:47.081308  205527 kubeadm.go:403] duration metric: took 29.781909942s to StartCluster
	I1123 08:44:47.081325  205527 settings.go:142] acquiring lock: {Name:mkfb77243b31dfe604b438e7da3f1bce2ba7b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:47.081450  205527 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:44:47.082908  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:47.083151  205527 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:44:47.083421  205527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:44:47.083650  205527 config.go:182] Loaded profile config "no-preload-596617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:44:47.083575  205527 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:47.083672  205527 addons.go:70] Setting storage-provisioner=true in profile "no-preload-596617"
	I1123 08:44:47.083680  205527 addons.go:70] Setting default-storageclass=true in profile "no-preload-596617"
	I1123 08:44:47.083692  205527 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-596617"
	I1123 08:44:47.083693  205527 addons.go:239] Setting addon storage-provisioner=true in "no-preload-596617"
	I1123 08:44:47.083720  205527 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:44:47.083983  205527 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:44:47.084208  205527 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:44:47.089154  205527 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:47.093108  205527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:47.126277  205527 addons.go:239] Setting addon default-storageclass=true in "no-preload-596617"
	I1123 08:44:47.126314  205527 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:44:47.126723  205527 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:44:47.140121  205527 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:45.732862  208070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:45.746787  208070 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:45.746809  208070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:45.746870  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:45.761834  208070 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:45.761862  208070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:45.761920  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:45.789622  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:45.802213  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:46.409851  208070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:46.457002  208070 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:44:46.457109  208070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:46.529172  208070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:48.581011  208070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.171129183s)
	I1123 08:44:48.581075  208070 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.123947846s)
	I1123 08:44:48.582164  208070 node_ready.go:35] waiting up to 6m0s for node "embed-certs-230843" to be "Ready" ...
	I1123 08:44:48.582493  208070 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.125465322s)
	I1123 08:44:48.582516  208070 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 08:44:48.583747  208070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.05450542s)
	I1123 08:44:48.679888  208070 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:44:48.683541  208070 addons.go:530] duration metric: took 3.012345078s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:44:47.144257  205527 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:47.144295  205527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:47.144377  205527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:44:47.172114  205527 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:47.172134  205527 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:47.172201  205527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:44:47.199200  205527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:44:47.223873  205527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:44:47.941786  205527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:48.030147  205527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:44:48.030298  205527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:48.055987  205527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:49.261973  205527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320107567s)
	I1123 08:44:49.262064  205527 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.231848863s)
	I1123 08:44:49.262084  205527 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 08:44:49.263528  205527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.207474353s)
	I1123 08:44:49.264287  205527 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.23169572s)
	I1123 08:44:49.266043  205527 node_ready.go:35] waiting up to 6m0s for node "no-preload-596617" to be "Ready" ...
	I1123 08:44:49.329816  205527 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:44:49.332649  205527 addons.go:530] duration metric: took 2.249069031s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:44:49.087300  208070 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-230843" context rescaled to 1 replicas
	W1123 08:44:50.586227  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:44:53.085172  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	I1123 08:44:49.767741  205527 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-596617" context rescaled to 1 replicas
	W1123 08:44:51.269363  205527 node_ready.go:57] node "no-preload-596617" has "Ready":"False" status (will retry)
	W1123 08:44:53.768931  205527 node_ready.go:57] node "no-preload-596617" has "Ready":"False" status (will retry)
	W1123 08:44:55.085253  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:44:57.085474  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:44:56.268932  205527 node_ready.go:57] node "no-preload-596617" has "Ready":"False" status (will retry)
	W1123 08:44:58.768622  205527 node_ready.go:57] node "no-preload-596617" has "Ready":"False" status (will retry)
	W1123 08:44:59.585786  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:45:02.085546  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:45:00.770654  205527 node_ready.go:57] node "no-preload-596617" has "Ready":"False" status (will retry)
	I1123 08:45:01.281251  205527 node_ready.go:49] node "no-preload-596617" is "Ready"
	I1123 08:45:01.281290  205527 node_ready.go:38] duration metric: took 12.015221271s for node "no-preload-596617" to be "Ready" ...
	I1123 08:45:01.281309  205527 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:45:01.281377  205527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:45:01.316855  205527 api_server.go:72] duration metric: took 14.23366653s to wait for apiserver process to appear ...
	I1123 08:45:01.316887  205527 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:01.316908  205527 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:45:01.327003  205527 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 08:45:01.328385  205527 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:01.328417  205527 api_server.go:131] duration metric: took 11.522392ms to wait for apiserver health ...
	I1123 08:45:01.328428  205527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:01.333329  205527 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:01.333376  205527 system_pods.go:61] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.333384  205527 system_pods.go:61] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:01.333390  205527 system_pods.go:61] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:01.333395  205527 system_pods.go:61] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:01.333449  205527 system_pods.go:61] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:01.333459  205527 system_pods.go:61] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:01.333464  205527 system_pods.go:61] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:01.333473  205527 system_pods.go:61] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.333487  205527 system_pods.go:74] duration metric: took 5.048629ms to wait for pod list to return data ...
	I1123 08:45:01.333496  205527 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:01.336410  205527 default_sa.go:45] found service account: "default"
	I1123 08:45:01.336444  205527 default_sa.go:55] duration metric: took 2.939943ms for default service account to be created ...
	I1123 08:45:01.336471  205527 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:01.342514  205527 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:01.342597  205527 system_pods.go:89] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.342624  205527 system_pods.go:89] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:01.342670  205527 system_pods.go:89] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:01.342694  205527 system_pods.go:89] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:01.342714  205527 system_pods.go:89] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:01.342737  205527 system_pods.go:89] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:01.342771  205527 system_pods.go:89] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:01.342797  205527 system_pods.go:89] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.342842  205527 retry.go:31] will retry after 302.46538ms: missing components: kube-dns
	I1123 08:45:01.650129  205527 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:01.650212  205527 system_pods.go:89] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.650238  205527 system_pods.go:89] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:01.650306  205527 system_pods.go:89] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:01.650335  205527 system_pods.go:89] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:01.650357  205527 system_pods.go:89] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:01.650379  205527 system_pods.go:89] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:01.650414  205527 system_pods.go:89] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:01.650438  205527 system_pods.go:89] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.650467  205527 retry.go:31] will retry after 375.532029ms: missing components: kube-dns
	I1123 08:45:02.048211  205527 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.048306  205527 system_pods.go:89] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.048331  205527 system_pods.go:89] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:02.048374  205527 system_pods.go:89] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:02.048401  205527 system_pods.go:89] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:02.048425  205527 system_pods.go:89] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:02.048452  205527 system_pods.go:89] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:02.048486  205527 system_pods.go:89] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:02.048522  205527 system_pods.go:89] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.048555  205527 retry.go:31] will retry after 443.454233ms: missing components: kube-dns
	I1123 08:45:02.496582  205527 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.496614  205527 system_pods.go:89] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.496620  205527 system_pods.go:89] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:02.496632  205527 system_pods.go:89] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:02.496637  205527 system_pods.go:89] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:02.496642  205527 system_pods.go:89] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:02.496646  205527 system_pods.go:89] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:02.496650  205527 system_pods.go:89] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:02.496656  205527 system_pods.go:89] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.496669  205527 retry.go:31] will retry after 464.392772ms: missing components: kube-dns
	I1123 08:45:02.965614  205527 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.965648  205527 system_pods.go:89] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Running
	I1123 08:45:02.965656  205527 system_pods.go:89] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:02.965661  205527 system_pods.go:89] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:02.965665  205527 system_pods.go:89] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:02.965670  205527 system_pods.go:89] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:02.965674  205527 system_pods.go:89] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:02.965677  205527 system_pods.go:89] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:02.965681  205527 system_pods.go:89] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Running
	I1123 08:45:02.965689  205527 system_pods.go:126] duration metric: took 1.629210644s to wait for k8s-apps to be running ...
	I1123 08:45:02.965701  205527 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:02.965758  205527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:02.985453  205527 system_svc.go:56] duration metric: took 19.742114ms WaitForService to wait for kubelet
	I1123 08:45:02.985481  205527 kubeadm.go:587] duration metric: took 15.902298083s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:02.985499  205527 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:02.988643  205527 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:45:02.988678  205527 node_conditions.go:123] node cpu capacity is 2
	I1123 08:45:02.988692  205527 node_conditions.go:105] duration metric: took 3.187494ms to run NodePressure ...
	I1123 08:45:02.988705  205527 start.go:242] waiting for startup goroutines ...
	I1123 08:45:02.988712  205527 start.go:247] waiting for cluster config update ...
	I1123 08:45:02.988725  205527 start.go:256] writing updated cluster config ...
	I1123 08:45:02.989017  205527 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:02.993812  205527 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:02.997841  205527 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-spk2c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.004721  205527 pod_ready.go:94] pod "coredns-66bc5c9577-spk2c" is "Ready"
	I1123 08:45:03.004756  205527 pod_ready.go:86] duration metric: took 6.885986ms for pod "coredns-66bc5c9577-spk2c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.007413  205527 pod_ready.go:83] waiting for pod "etcd-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.012962  205527 pod_ready.go:94] pod "etcd-no-preload-596617" is "Ready"
	I1123 08:45:03.012996  205527 pod_ready.go:86] duration metric: took 5.544062ms for pod "etcd-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.015650  205527 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.020395  205527 pod_ready.go:94] pod "kube-apiserver-no-preload-596617" is "Ready"
	I1123 08:45:03.020426  205527 pod_ready.go:86] duration metric: took 4.745775ms for pod "kube-apiserver-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.023005  205527 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.397709  205527 pod_ready.go:94] pod "kube-controller-manager-no-preload-596617" is "Ready"
	I1123 08:45:03.397742  205527 pod_ready.go:86] duration metric: took 374.711235ms for pod "kube-controller-manager-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.598194  205527 pod_ready.go:83] waiting for pod "kube-proxy-sq84q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.998648  205527 pod_ready.go:94] pod "kube-proxy-sq84q" is "Ready"
	I1123 08:45:03.998683  205527 pod_ready.go:86] duration metric: took 400.460193ms for pod "kube-proxy-sq84q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:04.198303  205527 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:04.597794  205527 pod_ready.go:94] pod "kube-scheduler-no-preload-596617" is "Ready"
	I1123 08:45:04.597822  205527 pod_ready.go:86] duration metric: took 399.49259ms for pod "kube-scheduler-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:04.597837  205527 pod_ready.go:40] duration metric: took 1.603993881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:04.657432  205527 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:45:04.660624  205527 out.go:179] * Done! kubectl is now configured to use "no-preload-596617" cluster and "default" namespace by default
	W1123 08:45:04.586000  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:45:07.085884  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	72dc2e979e5ad       1611cd07b61d5       6 seconds ago       Running             busybox                   0                   2df5c11b9426b       busybox                                     default
	688a87c4a6cfd       138784d87c9c5       12 seconds ago      Running             coredns                   0                   46b94f7bae627       coredns-66bc5c9577-spk2c                    kube-system
	91c445761c112       66749159455b3       12 seconds ago      Running             storage-provisioner       0                   9315df4ee20b9       storage-provisioner                         kube-system
	4ff14e6367451       b1a8c6f707935       23 seconds ago      Running             kindnet-cni               0                   fd291fa8cf12b       kindnet-68b4f                               kube-system
	38a03d8690d80       05baa95f5142d       25 seconds ago      Running             kube-proxy                0                   0fe71124eeed0       kube-proxy-sq84q                            kube-system
	ae63305653ca8       a1894772a478e       43 seconds ago      Running             etcd                      0                   0fe3b525b0fb7       etcd-no-preload-596617                      kube-system
	2e1de07c6493d       7eb2c6ff0c5a7       43 seconds ago      Running             kube-controller-manager   0                   25f22a830344d       kube-controller-manager-no-preload-596617   kube-system
	3922d3ac1a3fa       b5f57ec6b9867       43 seconds ago      Running             kube-scheduler            0                   df50c7fad9f34       kube-scheduler-no-preload-596617            kube-system
	0106e17e619c2       43911e833d64d       43 seconds ago      Running             kube-apiserver            0                   4154848c60b4a       kube-apiserver-no-preload-596617            kube-system
	
	
	==> containerd <==
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.726304253Z" level=info msg="connecting to shim 91c445761c11225f240bc25605c50446bcaa23a89a3ee6c7f275c64941c44788" address="unix:///run/containerd/s/e9e73f3bd70fd8296c0530b6ceadaa40b81c451329c7b588036267e425341a63" protocol=ttrpc version=3
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.761886017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-spk2c,Uid:7d69a45e-abdd-4480-8b79-7bb112b3eb7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"46b94f7bae62779e99be4f918ef19eb1e3dccdb17404b7c5e774669b02193eb4\""
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.772237457Z" level=info msg="CreateContainer within sandbox \"46b94f7bae62779e99be4f918ef19eb1e3dccdb17404b7c5e774669b02193eb4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.788536099Z" level=info msg="Container 688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.800926941Z" level=info msg="CreateContainer within sandbox \"46b94f7bae62779e99be4f918ef19eb1e3dccdb17404b7c5e774669b02193eb4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002\""
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.801901674Z" level=info msg="StartContainer for \"688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002\""
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.802839950Z" level=info msg="connecting to shim 688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002" address="unix:///run/containerd/s/b4521a10350225922a802893d08b1e1e12eff30058b0dcfad667d18b30409d6a" protocol=ttrpc version=3
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.885754303Z" level=info msg="StartContainer for \"91c445761c11225f240bc25605c50446bcaa23a89a3ee6c7f275c64941c44788\" returns successfully"
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.941730021Z" level=info msg="StartContainer for \"688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002\" returns successfully"
	Nov 23 08:45:05 no-preload-596617 containerd[757]: time="2025-11-23T08:45:05.207962028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9b93317d-72f3-440c-9896-cb6d0b98f255,Namespace:default,Attempt:0,}"
	Nov 23 08:45:05 no-preload-596617 containerd[757]: time="2025-11-23T08:45:05.266783434Z" level=info msg="connecting to shim 2df5c11b9426b4afdee47f6090f6d6cca0d3f5ee73778e073294963eba47e889" address="unix:///run/containerd/s/3b0c63e21a15375d0fdbf13d68951d1c2de67b19ee97324c432ce465fb88e12e" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:45:05 no-preload-596617 containerd[757]: time="2025-11-23T08:45:05.339894167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9b93317d-72f3-440c-9896-cb6d0b98f255,Namespace:default,Attempt:0,} returns sandbox id \"2df5c11b9426b4afdee47f6090f6d6cca0d3f5ee73778e073294963eba47e889\""
	Nov 23 08:45:05 no-preload-596617 containerd[757]: time="2025-11-23T08:45:05.342066264Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.300023316Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.301839790Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937187"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.304099215Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.308076085Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 1.965966761s"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.308290217Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.313601554Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.322804729Z" level=info msg="CreateContainer within sandbox \"2df5c11b9426b4afdee47f6090f6d6cca0d3f5ee73778e073294963eba47e889\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.337991545Z" level=info msg="Container 72dc2e979e5addf37d709e0deb3678494c8888240d86a15cd721576bfc1803bd: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.347996610Z" level=info msg="CreateContainer within sandbox \"2df5c11b9426b4afdee47f6090f6d6cca0d3f5ee73778e073294963eba47e889\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"72dc2e979e5addf37d709e0deb3678494c8888240d86a15cd721576bfc1803bd\""
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.349322461Z" level=info msg="StartContainer for \"72dc2e979e5addf37d709e0deb3678494c8888240d86a15cd721576bfc1803bd\""
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.350714954Z" level=info msg="connecting to shim 72dc2e979e5addf37d709e0deb3678494c8888240d86a15cd721576bfc1803bd" address="unix:///run/containerd/s/3b0c63e21a15375d0fdbf13d68951d1c2de67b19ee97324c432ce465fb88e12e" protocol=ttrpc version=3
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.413577321Z" level=info msg="StartContainer for \"72dc2e979e5addf37d709e0deb3678494c8888240d86a15cd721576bfc1803bd\" returns successfully"
	
	
	==> coredns [688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51486 - 2615 "HINFO IN 1421145875051200784.6464575213870468913. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004837337s
	
	
	==> describe nodes <==
	Name:               no-preload-596617
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-596617
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=no-preload-596617
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:44:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-596617
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:45:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:45:13 +0000   Sun, 23 Nov 2025 08:44:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:45:13 +0000   Sun, 23 Nov 2025 08:44:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:45:13 +0000   Sun, 23 Nov 2025 08:44:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:45:13 +0000   Sun, 23 Nov 2025 08:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-596617
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                6cbb2352-56dd-44f5-96aa-57c90ae6b957
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-spk2c                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-no-preload-596617                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-68b4f                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-596617             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-596617    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-sq84q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-596617             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-596617 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-596617 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node no-preload-596617 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node no-preload-596617 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node no-preload-596617 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node no-preload-596617 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node no-preload-596617 event: Registered Node no-preload-596617 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-596617 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	
	
	==> etcd [ae63305653ca8cbbd80c13dd0f9434bfc3feedc3bbff30a329f62b0559f2895a] <==
	{"level":"warn","ts":"2025-11-23T08:44:36.634259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.678987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.765616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.811318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.870363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.916784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.969796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.033670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.071150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.132866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.166166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.225907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.263366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.297268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.328395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.353157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.416623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.444153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.490692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.504255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.537148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.567760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.622561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.653238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.821517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56888","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:45:14 up  1:27,  0 user,  load average: 4.31, 3.94, 3.22
	Linux no-preload-596617 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4ff14e63674511be9833e17757d7ac8c83cf043c373fdfaeba96b335a278376f] <==
	I1123 08:44:50.864346       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:50.865396       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:44:50.865567       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:50.865578       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:50.865592       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:51.160945       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:51.161137       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:51.161228       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:51.162129       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:44:51.362112       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:44:51.362241       1 metrics.go:72] Registering metrics
	I1123 08:44:51.362346       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:01.164406       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:01.164486       1 main.go:301] handling current node
	I1123 08:45:11.161927       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:11.161968       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0106e17e619c21c4f70b18b18b785e598009939262000db201255c4c23134bb6] <==
	I1123 08:44:39.263837       1 policy_source.go:240] refreshing policies
	I1123 08:44:39.311438       1 controller.go:667] quota admission added evaluator for: namespaces
	E1123 08:44:39.323488       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1123 08:44:39.372476       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:39.372740       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:44:39.414601       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:39.415614       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:44:39.477796       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:44:39.687350       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:44:39.699877       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:44:39.700083       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:44:40.943845       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:44:41.002489       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:44:41.090414       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:44:41.127424       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:44:41.128745       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:44:41.145314       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:44:41.150733       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:44:42.092836       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:44:42.147907       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:44:42.171337       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:44:46.685247       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:46.695371       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:46.843823       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:44:47.048758       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2e1de07c6493d308c8cde1bd08ad1af4bde14c9a11d6c18de05914a462d0021b] <==
	I1123 08:44:46.228539       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:44:46.228651       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:44:46.228685       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:44:46.229565       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:44:46.229836       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:44:46.233098       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:44:46.242563       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:44:46.244450       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:44:46.244635       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:44:46.244765       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-596617"
	I1123 08:44:46.244847       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:44:46.259767       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:44:46.313593       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:46.315103       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:44:46.315422       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:44:46.315855       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:44:46.315951       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:46.316036       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:44:46.316214       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:44:46.412608       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-596617" podCIDRs=["10.244.0.0/24"]
	I1123 08:44:46.414393       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:46.430115       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:46.430144       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:44:46.430151       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:45:01.247537       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [38a03d8690d80f6c742953b846418123550408e6b4fc3bc3ed61b8578754af02] <==
	I1123 08:44:49.072187       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:44:49.179435       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:44:49.285555       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:44:49.285588       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:44:49.285667       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:44:49.385594       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:49.385658       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:44:49.391244       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:44:49.391562       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:44:49.391575       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:49.393241       1 config.go:200] "Starting service config controller"
	I1123 08:44:49.393255       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:44:49.403296       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:44:49.403373       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:44:49.403395       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:44:49.403399       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:44:49.404117       1 config.go:309] "Starting node config controller"
	I1123 08:44:49.404152       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:44:49.404159       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:44:49.494235       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:44:49.503687       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:44:49.503723       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3922d3ac1a3fa33fc277f69cf60fea88cb74510306d065fff3aedfcea5e11cd5] <==
	E1123 08:44:39.210367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:44:39.210412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:44:39.210470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:44:39.210512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:44:39.210555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:44:39.210598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:44:39.210710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:44:39.210768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:44:39.210808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:44:39.210847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:44:39.211010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:44:39.211206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 08:44:40.072950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:44:40.086703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:44:40.098489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 08:44:40.146816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:44:40.249718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:44:40.320045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:44:40.320358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:44:40.353714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:44:40.365398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:44:40.388090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:44:40.452755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:44:40.473656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1123 08:44:41.881850       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027220    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x55q\" (UniqueName: \"kubernetes.io/projected/a70ddc44-854e-4253-aa99-0bd199e34d0e-kube-api-access-5x55q\") pod \"kube-proxy-sq84q\" (UID: \"a70ddc44-854e-4253-aa99-0bd199e34d0e\") " pod="kube-system/kube-proxy-sq84q"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027281    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-cni-cfg\") pod \"kindnet-68b4f\" (UID: \"1e512ae4-2f16-4e9d-898a-51c754a6d8d7\") " pod="kube-system/kindnet-68b4f"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027301    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a70ddc44-854e-4253-aa99-0bd199e34d0e-lib-modules\") pod \"kube-proxy-sq84q\" (UID: \"a70ddc44-854e-4253-aa99-0bd199e34d0e\") " pod="kube-system/kube-proxy-sq84q"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027352    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-lib-modules\") pod \"kindnet-68b4f\" (UID: \"1e512ae4-2f16-4e9d-898a-51c754a6d8d7\") " pod="kube-system/kindnet-68b4f"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027369    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74k7\" (UniqueName: \"kubernetes.io/projected/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-kube-api-access-k74k7\") pod \"kindnet-68b4f\" (UID: \"1e512ae4-2f16-4e9d-898a-51c754a6d8d7\") " pod="kube-system/kindnet-68b4f"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027434    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-xtables-lock\") pod \"kindnet-68b4f\" (UID: \"1e512ae4-2f16-4e9d-898a-51c754a6d8d7\") " pod="kube-system/kindnet-68b4f"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027452    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a70ddc44-854e-4253-aa99-0bd199e34d0e-kube-proxy\") pod \"kube-proxy-sq84q\" (UID: \"a70ddc44-854e-4253-aa99-0bd199e34d0e\") " pod="kube-system/kube-proxy-sq84q"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403554    2122 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403587    2122 projected.go:196] Error preparing data for projected volume kube-api-access-5x55q for pod kube-system/kube-proxy-sq84q: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403689    2122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a70ddc44-854e-4253-aa99-0bd199e34d0e-kube-api-access-5x55q podName:a70ddc44-854e-4253-aa99-0bd199e34d0e nodeName:}" failed. No retries permitted until 2025-11-23 08:44:47.903664457 +0000 UTC m=+5.927624652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5x55q" (UniqueName: "kubernetes.io/projected/a70ddc44-854e-4253-aa99-0bd199e34d0e-kube-api-access-5x55q") pod "kube-proxy-sq84q" (UID: "a70ddc44-854e-4253-aa99-0bd199e34d0e") : configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403909    2122 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403921    2122 projected.go:196] Error preparing data for projected volume kube-api-access-k74k7 for pod kube-system/kindnet-68b4f: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403963    2122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-kube-api-access-k74k7 podName:1e512ae4-2f16-4e9d-898a-51c754a6d8d7 nodeName:}" failed. No retries permitted until 2025-11-23 08:44:47.903951132 +0000 UTC m=+5.927911327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k74k7" (UniqueName: "kubernetes.io/projected/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-kube-api-access-k74k7") pod "kindnet-68b4f" (UID: "1e512ae4-2f16-4e9d-898a-51c754a6d8d7") : configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.953125    2122 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 08:44:51 no-preload-596617 kubelet[2122]: I1123 08:44:51.526335    2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-68b4f" podStartSLOduration=3.494759238 podStartE2EDuration="5.526315344s" podCreationTimestamp="2025-11-23 08:44:46 +0000 UTC" firstStartedPulling="2025-11-23 08:44:48.655410264 +0000 UTC m=+6.679370451" lastFinishedPulling="2025-11-23 08:44:50.68696637 +0000 UTC m=+8.710926557" observedRunningTime="2025-11-23 08:44:51.52594592 +0000 UTC m=+9.549906123" watchObservedRunningTime="2025-11-23 08:44:51.526315344 +0000 UTC m=+9.550275539"
	Nov 23 08:44:51 no-preload-596617 kubelet[2122]: I1123 08:44:51.526455    2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sq84q" podStartSLOduration=5.526449828 podStartE2EDuration="5.526449828s" podCreationTimestamp="2025-11-23 08:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:49.511706429 +0000 UTC m=+7.535666624" watchObservedRunningTime="2025-11-23 08:44:51.526449828 +0000 UTC m=+9.550410023"
	Nov 23 08:45:01 no-preload-596617 kubelet[2122]: I1123 08:45:01.196886    2122 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:45:01 no-preload-596617 kubelet[2122]: I1123 08:45:01.278906    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbf4fd29-62c7-49d8-b210-930c2bd6c7b4-tmp\") pod \"storage-provisioner\" (UID: \"bbf4fd29-62c7-49d8-b210-930c2bd6c7b4\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:01 no-preload-596617 kubelet[2122]: I1123 08:45:01.279150    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx4m6\" (UniqueName: \"kubernetes.io/projected/7d69a45e-abdd-4480-8b79-7bb112b3eb7f-kube-api-access-hx4m6\") pod \"coredns-66bc5c9577-spk2c\" (UID: \"7d69a45e-abdd-4480-8b79-7bb112b3eb7f\") " pod="kube-system/coredns-66bc5c9577-spk2c"
	Nov 23 08:45:01 no-preload-596617 kubelet[2122]: I1123 08:45:01.279267    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5kf6\" (UniqueName: \"kubernetes.io/projected/bbf4fd29-62c7-49d8-b210-930c2bd6c7b4-kube-api-access-x5kf6\") pod \"storage-provisioner\" (UID: \"bbf4fd29-62c7-49d8-b210-930c2bd6c7b4\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:01 no-preload-596617 kubelet[2122]: I1123 08:45:01.279378    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d69a45e-abdd-4480-8b79-7bb112b3eb7f-config-volume\") pod \"coredns-66bc5c9577-spk2c\" (UID: \"7d69a45e-abdd-4480-8b79-7bb112b3eb7f\") " pod="kube-system/coredns-66bc5c9577-spk2c"
	Nov 23 08:45:02 no-preload-596617 kubelet[2122]: I1123 08:45:02.611815    2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-spk2c" podStartSLOduration=15.611795453 podStartE2EDuration="15.611795453s" podCreationTimestamp="2025-11-23 08:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:02.586634267 +0000 UTC m=+20.610594471" watchObservedRunningTime="2025-11-23 08:45:02.611795453 +0000 UTC m=+20.635755640"
	Nov 23 08:45:04 no-preload-596617 kubelet[2122]: I1123 08:45:04.889166    2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.889135369 podStartE2EDuration="15.889135369s" podCreationTimestamp="2025-11-23 08:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:02.641611474 +0000 UTC m=+20.665571669" watchObservedRunningTime="2025-11-23 08:45:04.889135369 +0000 UTC m=+22.913095564"
	Nov 23 08:45:04 no-preload-596617 kubelet[2122]: I1123 08:45:04.902542    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfk2p\" (UniqueName: \"kubernetes.io/projected/9b93317d-72f3-440c-9896-cb6d0b98f255-kube-api-access-qfk2p\") pod \"busybox\" (UID: \"9b93317d-72f3-440c-9896-cb6d0b98f255\") " pod="default/busybox"
	Nov 23 08:45:13 no-preload-596617 kubelet[2122]: E1123 08:45:13.039046    2122 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.85.2:41256->192.168.85.2:10010: read tcp 192.168.85.2:41256->192.168.85.2:10010: read: connection reset by peer
	
	
	==> storage-provisioner [91c445761c11225f240bc25605c50446bcaa23a89a3ee6c7f275c64941c44788] <==
	I1123 08:45:01.949065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:45:02.057538       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:45:02.057595       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:02.060413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:02.074470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:02.074917       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:02.075272       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-596617_bffee8a0-c5ce-4f43-b168-186013674e96!
	I1123 08:45:02.076532       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36ec1114-3ceb-4c05-ab16-32b7af61b9eb", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-596617_bffee8a0-c5ce-4f43-b168-186013674e96 became leader
	W1123 08:45:02.079000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:02.087962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:02.175694       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-596617_bffee8a0-c5ce-4f43-b168-186013674e96!
	W1123 08:45:04.090654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:04.095688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:06.099700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:06.104565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:08.107676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:08.115970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:10.127027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:10.132118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:12.136068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:12.143289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:14.147252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:14.152648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-596617 -n no-preload-596617
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-596617 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-596617
helpers_test.go:243: (dbg) docker inspect no-preload-596617:

-- stdout --
	[
	    {
	        "Id": "a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535",
	        "Created": "2025-11-23T08:43:50.62986252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 205837,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:43:50.732730436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535/hosts",
	        "LogPath": "/var/lib/docker/containers/a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535/a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535-json.log",
	        "Name": "/no-preload-596617",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-596617:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-596617",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4a24325fbe794cf5f60d926fae91f7d761a86b894e0fe3b550364fd00fa8535",
	                "LowerDir": "/var/lib/docker/overlay2/3cf064fffc5f7850a69c9f83c5fdbcf4caf517683876e249afa8ec526609f9fa-init/diff:/var/lib/docker/overlay2/88c30082a717909d357f7d81c88a05ce3487a40d372ee6dc57fb9f012e0502da/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3cf064fffc5f7850a69c9f83c5fdbcf4caf517683876e249afa8ec526609f9fa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3cf064fffc5f7850a69c9f83c5fdbcf4caf517683876e249afa8ec526609f9fa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3cf064fffc5f7850a69c9f83c5fdbcf4caf517683876e249afa8ec526609f9fa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-596617",
	                "Source": "/var/lib/docker/volumes/no-preload-596617/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-596617",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-596617",
	                "name.minikube.sigs.k8s.io": "no-preload-596617",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "00b92314d34cf5857c066ae60db5365912921d8c4d66561bf3f3463cb270b201",
	            "SandboxKey": "/var/run/docker/netns/00b92314d34c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-596617": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:19:98:c6:9b:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "51e4af3ac2a76ea2ea64d1c486af05de7ac03b53a1cfb84aeab01a138e31c84c",
	                    "EndpointID": "b3ad4551d70f7ca491a8bcfac13f2b9967037a34fb51e45b38d9dd8afc1ceaf8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-596617",
	                        "a4a24325fbe7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-596617 -n no-preload-596617
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-596617 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-596617 logs -n 25: (1.230624081s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p cilium-440243 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ ssh     │ -p cilium-440243 sudo crio config                                                                                                                                                                                                                   │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │                     │
	│ delete  │ -p cilium-440243                                                                                                                                                                                                                                    │ cilium-440243            │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │ 23 Nov 25 08:39 UTC │
	│ start   │ -p cert-expiration-119748 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │ 23 Nov 25 08:40 UTC │
	│ ssh     │ force-systemd-env-760522 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ delete  │ -p force-systemd-env-760522                                                                                                                                                                                                                         │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ start   │ -p cert-options-106536 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ cert-options-106536 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ -p cert-options-106536 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p cert-options-106536                                                                                                                                                                                                                              │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-180638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ stop    │ -p old-k8s-version-180638 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-180638 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p cert-expiration-119748 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p cert-expiration-119748                                                                                                                                                                                                                           │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p no-preload-596617 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-180638 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ pause   │ -p old-k8s-version-180638 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ unpause │ -p old-k8s-version-180638 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p old-k8s-version-180638                                                                                                                                                                                                                           │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p old-k8s-version-180638                                                                                                                                                                                                                           │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p embed-certs-230843 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-230843       │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:43:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:43:58.820393  208070 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:43:58.820553  208070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:58.820559  208070 out.go:374] Setting ErrFile to fd 2...
	I1123 08:43:58.820564  208070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:58.820832  208070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:43:58.821280  208070 out.go:368] Setting JSON to false
	I1123 08:43:58.822319  208070 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5188,"bootTime":1763882251,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:43:58.822477  208070 start.go:143] virtualization:  
	I1123 08:43:58.828335  208070 out.go:179] * [embed-certs-230843] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:43:58.831686  208070 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:43:58.831751  208070 notify.go:221] Checking for updates...
	I1123 08:43:58.838329  208070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:43:58.841475  208070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:43:58.844958  208070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 08:43:58.848024  208070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:43:58.850992  208070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:43:58.854518  208070 config.go:182] Loaded profile config "no-preload-596617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:43:58.854632  208070 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:43:58.895036  208070 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:43:58.895154  208070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:43:59.011215  208070 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 08:43:59.000339025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:43:59.011336  208070 docker.go:319] overlay module found
	I1123 08:43:59.014535  208070 out.go:179] * Using the docker driver based on user configuration
	I1123 08:43:59.017573  208070 start.go:309] selected driver: docker
	I1123 08:43:59.017601  208070 start.go:927] validating driver "docker" against <nil>
	I1123 08:43:59.017627  208070 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:43:59.018307  208070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:43:59.147104  208070 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 08:43:59.137186105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:43:59.147255  208070 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:43:59.147482  208070 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:43:59.150526  208070 out.go:179] * Using Docker driver with root privileges
	I1123 08:43:59.153370  208070 cni.go:84] Creating CNI manager for ""
	I1123 08:43:59.153454  208070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:43:59.153468  208070 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:43:59.153541  208070 start.go:353] cluster config:
	{Name:embed-certs-230843 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-230843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:43:59.156589  208070 out.go:179] * Starting "embed-certs-230843" primary control-plane node in "embed-certs-230843" cluster
	I1123 08:43:59.159437  208070 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:43:59.162459  208070 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:43:59.165295  208070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:43:59.165347  208070 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 08:43:59.165357  208070 cache.go:65] Caching tarball of preloaded images
	I1123 08:43:59.165392  208070 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:43:59.165531  208070 preload.go:238] Found /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:43:59.165541  208070 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:43:59.165651  208070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/config.json ...
	I1123 08:43:59.165677  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/config.json: {Name:mk4d6baf73ed74f8398c7a685c69000ceb39bedf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:59.194302  208070 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:43:59.194327  208070 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:43:59.194343  208070 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:43:59.194371  208070 start.go:360] acquireMachinesLock for embed-certs-230843: {Name:mk7c64cffb325c304ae7da75fe620432eaf24373 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:59.194477  208070 start.go:364] duration metric: took 86.975µs to acquireMachinesLock for "embed-certs-230843"
	I1123 08:43:59.194508  208070 start.go:93] Provisioning new machine with config: &{Name:embed-certs-230843 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-230843 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:43:59.194586  208070 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:43:58.436437  205527 cli_runner.go:164] Run: docker network inspect no-preload-596617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:58.456601  205527 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:43:58.461020  205527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:43:58.474078  205527 kubeadm.go:884] updating cluster {Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:43:58.474186  205527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:43:58.474240  205527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:43:58.502532  205527 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1123 08:43:58.502558  205527 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1123 08:43:58.502636  205527 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:58.502869  205527 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:58.503526  205527 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:58.503878  205527 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:58.504625  205527 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:58.505640  205527 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:58.505898  205527 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:58.505908  205527 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 08:43:58.507486  205527 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:58.509051  205527 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:58.510092  205527 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:58.510407  205527 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:58.510531  205527 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:58.510711  205527 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:58.510754  205527 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:58.510092  205527 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 08:43:58.731935  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0"
	I1123 08:43:58.732012  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:58.748605  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1123 08:43:58.748755  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1123 08:43:58.750486  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e"
	I1123 08:43:58.750548  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:58.757704  205527 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1123 08:43:58.757742  205527 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:58.757787  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.766799  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc"
	I1123 08:43:58.766862  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:58.770747  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196"
	I1123 08:43:58.770812  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:58.771443  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9"
	I1123 08:43:58.771484  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:58.818696  205527 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a"
	I1123 08:43:58.818766  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:58.824233  205527 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1123 08:43:58.824271  205527 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1123 08:43:58.824318  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.824397  205527 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1123 08:43:58.824412  205527 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:58.824435  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.824508  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:58.869154  205527 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1123 08:43:58.869194  205527 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:58.869243  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.869295  205527 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1123 08:43:58.869307  205527 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:58.869327  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.880104  205527 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1123 08:43:58.880157  205527 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:58.880213  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.923474  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:58.923527  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:58.923639  205527 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1123 08:43:58.923668  205527 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:58.923698  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:43:58.934601  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:58.934660  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:58.934703  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:58.934738  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:58.934838  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:59.100632  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:59.100844  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:59.100992  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:59.127697  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:59.127840  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:59.128054  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:59.128147  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:59.300226  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:43:59.300325  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:43:59.300404  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:59.300460  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:59.329327  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:59.329499  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:59.329568  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:59.329620  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:59.396955  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:43:59.397058  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:43:59.397120  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1123 08:43:59.397138  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1123 08:43:59.397178  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1123 08:43:59.397224  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:59.197893  208070 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:43:59.198118  208070 start.go:159] libmachine.API.Create for "embed-certs-230843" (driver="docker")
	I1123 08:43:59.198157  208070 client.go:173] LocalClient.Create starting
	I1123 08:43:59.198277  208070 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem
	I1123 08:43:59.198319  208070 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:59.198342  208070 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:59.198395  208070 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem
	I1123 08:43:59.198417  208070 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:59.198433  208070 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:59.198803  208070 cli_runner.go:164] Run: docker network inspect embed-certs-230843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:43:59.215653  208070 cli_runner.go:211] docker network inspect embed-certs-230843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:43:59.215745  208070 network_create.go:284] running [docker network inspect embed-certs-230843] to gather additional debugging logs...
	I1123 08:43:59.215762  208070 cli_runner.go:164] Run: docker network inspect embed-certs-230843
	W1123 08:43:59.234026  208070 cli_runner.go:211] docker network inspect embed-certs-230843 returned with exit code 1
	I1123 08:43:59.234055  208070 network_create.go:287] error running [docker network inspect embed-certs-230843]: docker network inspect embed-certs-230843: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-230843 not found
	I1123 08:43:59.234070  208070 network_create.go:289] output of [docker network inspect embed-certs-230843]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-230843 not found
	
	** /stderr **
	I1123 08:43:59.234159  208070 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:59.254328  208070 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a946cc9c0edf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:ea:52:17:a9:7a} reservation:<nil>}
	I1123 08:43:59.254644  208070 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb33daef15c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:08:1d:d1:c6:df} reservation:<nil>}
	I1123 08:43:59.254975  208070 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb61edac6088 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:64:59:e2:c3:5a} reservation:<nil>}
	I1123 08:43:59.255396  208070 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ccbb0}
	I1123 08:43:59.255414  208070 network_create.go:124] attempt to create docker network embed-certs-230843 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 08:43:59.255470  208070 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-230843 embed-certs-230843
	I1123 08:43:59.334308  208070 network_create.go:108] docker network embed-certs-230843 192.168.76.0/24 created
	I1123 08:43:59.334339  208070 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-230843" container
	I1123 08:43:59.334426  208070 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:43:59.359885  208070 cli_runner.go:164] Run: docker volume create embed-certs-230843 --label name.minikube.sigs.k8s.io=embed-certs-230843 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:43:59.382840  208070 oci.go:103] Successfully created a docker volume embed-certs-230843
	I1123 08:43:59.382935  208070 cli_runner.go:164] Run: docker run --rm --name embed-certs-230843-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-230843 --entrypoint /usr/bin/test -v embed-certs-230843:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:44:00.279925  208070 oci.go:107] Successfully prepared a docker volume embed-certs-230843
	I1123 08:44:00.280008  208070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:44:00.280024  208070 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:44:00.280111  208070 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-230843:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:43:59.517536  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:43:59.517637  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:43:59.517695  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:43:59.517743  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:43:59.517788  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:43:59.517833  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:43:59.517878  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:43:59.517921  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:43:59.517967  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1123 08:43:59.517983  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1123 08:43:59.518020  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1123 08:43:59.518033  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1123 08:43:59.583417  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1123 08:43:59.583452  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1123 08:43:59.583493  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1123 08:43:59.583504  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1123 08:43:59.583532  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1123 08:43:59.583551  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1123 08:43:59.583579  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1123 08:43:59.583592  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1123 08:43:59.700483  205527 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:59.701931  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1123 08:43:59.968550  205527 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1123 08:43:59.968686  205527 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1123 08:43:59.968748  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:00.112305  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1123 08:44:00.181960  205527 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1123 08:44:00.182069  205527 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:00.182314  205527 ssh_runner.go:195] Run: which crictl
	I1123 08:44:00.273861  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:00.440733  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:00.471719  205527 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:44:00.471798  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:44:00.584201  205527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:02.492982  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.021158278s)
	I1123 08:44:02.493052  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1123 08:44:02.493087  205527 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:44:02.493164  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:44:02.493253  205527 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.909026288s)
	I1123 08:44:02.493314  205527 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1123 08:44:02.493464  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:44:04.398900  205527 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.90539484s)
	I1123 08:44:04.398935  205527 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1123 08:44:04.398961  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1123 08:44:04.399012  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.905817918s)
	I1123 08:44:04.399026  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1123 08:44:04.399052  205527 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:44:04.399096  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:44:06.167290  208070 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-230843:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.887129254s)
	I1123 08:44:06.167318  208070 kic.go:203] duration metric: took 5.887290897s to extract preloaded images to volume ...
	W1123 08:44:06.167453  208070 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:44:06.167554  208070 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:44:06.252841  208070 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-230843 --name embed-certs-230843 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-230843 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-230843 --network embed-certs-230843 --ip 192.168.76.2 --volume embed-certs-230843:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:44:06.697562  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Running}}
	I1123 08:44:06.726066  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:06.754105  208070 cli_runner.go:164] Run: docker exec embed-certs-230843 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:44:06.820360  208070 oci.go:144] the created container "embed-certs-230843" has a running status.
	I1123 08:44:06.820392  208070 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa...
	I1123 08:44:07.367947  208070 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:44:07.405852  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:07.439306  208070 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:44:07.439326  208070 kic_runner.go:114] Args: [docker exec --privileged embed-certs-230843 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:44:07.546025  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:07.572710  208070 machine.go:94] provisionDockerMachine start ...
	I1123 08:44:07.572805  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:07.602226  208070 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:07.602552  208070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1123 08:44:07.602562  208070 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:44:07.603380  208070 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51518->127.0.0.1:33068: read: connection reset by peer
	I1123 08:44:07.080178  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (2.681055363s)
	I1123 08:44:07.080202  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1123 08:44:07.080220  205527 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:44:07.080264  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:44:08.315422  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.235136614s)
	I1123 08:44:08.315452  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1123 08:44:08.315470  205527 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:44:08.315515  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:44:10.769814  208070 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-230843
	
	I1123 08:44:10.769904  208070 ubuntu.go:182] provisioning hostname "embed-certs-230843"
	I1123 08:44:10.769999  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:10.791161  208070 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:10.791487  208070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1123 08:44:10.791503  208070 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-230843 && echo "embed-certs-230843" | sudo tee /etc/hostname
	I1123 08:44:10.963401  208070 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-230843
	
	I1123 08:44:10.963550  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:10.985988  208070 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:10.986321  208070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1123 08:44:10.986337  208070 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-230843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-230843/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-230843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:44:11.154222  208070 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:44:11.154254  208070 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-2339/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-2339/.minikube}
	I1123 08:44:11.154285  208070 ubuntu.go:190] setting up certificates
	I1123 08:44:11.154294  208070 provision.go:84] configureAuth start
	I1123 08:44:11.154355  208070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-230843
	I1123 08:44:11.180464  208070 provision.go:143] copyHostCerts
	I1123 08:44:11.180527  208070 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem, removing ...
	I1123 08:44:11.180536  208070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem
	I1123 08:44:11.180608  208070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem (1078 bytes)
	I1123 08:44:11.180708  208070 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem, removing ...
	I1123 08:44:11.180714  208070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem
	I1123 08:44:11.180779  208070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem (1123 bytes)
	I1123 08:44:11.180880  208070 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem, removing ...
	I1123 08:44:11.180890  208070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem
	I1123 08:44:11.180927  208070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem (1675 bytes)
	I1123 08:44:11.180985  208070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem org=jenkins.embed-certs-230843 san=[127.0.0.1 192.168.76.2 embed-certs-230843 localhost minikube]
	I1123 08:44:11.380799  208070 provision.go:177] copyRemoteCerts
	I1123 08:44:11.380857  208070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:44:11.380909  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:11.397936  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:11.513153  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:44:11.535766  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:44:11.555866  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:44:11.578173  208070 provision.go:87] duration metric: took 423.857195ms to configureAuth
	I1123 08:44:11.578201  208070 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:44:11.578383  208070 config.go:182] Loaded profile config "embed-certs-230843": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:44:11.578397  208070 machine.go:97] duration metric: took 4.005668909s to provisionDockerMachine
	I1123 08:44:11.578406  208070 client.go:176] duration metric: took 12.380236941s to LocalClient.Create
	I1123 08:44:11.578420  208070 start.go:167] duration metric: took 12.3803031s to libmachine.API.Create "embed-certs-230843"
	I1123 08:44:11.578432  208070 start.go:293] postStartSetup for "embed-certs-230843" (driver="docker")
	I1123 08:44:11.578441  208070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:44:11.578492  208070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:44:11.578532  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:11.598372  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:11.710512  208070 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:44:11.714324  208070 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:44:11.714355  208070 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:44:11.714366  208070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/addons for local assets ...
	I1123 08:44:11.714420  208070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/files for local assets ...
	I1123 08:44:11.714501  208070 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem -> 41512.pem in /etc/ssl/certs
	I1123 08:44:11.714610  208070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:44:11.723788  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:44:11.745314  208070 start.go:296] duration metric: took 166.868593ms for postStartSetup
	I1123 08:44:11.745692  208070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-230843
	I1123 08:44:11.767363  208070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/config.json ...
	I1123 08:44:11.767699  208070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:44:11.767784  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:11.791138  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:11.902467  208070 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:44:11.908289  208070 start.go:128] duration metric: took 12.713686598s to createHost
	I1123 08:44:11.908320  208070 start.go:83] releasing machines lock for "embed-certs-230843", held for 12.713824618s
	I1123 08:44:11.908397  208070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-230843
	I1123 08:44:11.927081  208070 ssh_runner.go:195] Run: cat /version.json
	I1123 08:44:11.927166  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:11.927332  208070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:44:11.927444  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:11.962785  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:11.969210  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:12.077248  208070 ssh_runner.go:195] Run: systemctl --version
	I1123 08:44:12.174947  208070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:44:12.181170  208070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:44:12.181252  208070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:44:12.214289  208070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:44:12.214352  208070 start.go:496] detecting cgroup driver to use...
	I1123 08:44:12.214401  208070 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:44:12.214479  208070 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:44:12.230990  208070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:44:12.246483  208070 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:44:12.246552  208070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:44:12.265768  208070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:44:12.286028  208070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:44:12.437707  208070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:44:12.599250  208070 docker.go:234] disabling docker service ...
	I1123 08:44:12.599316  208070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:44:12.626144  208070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:44:12.640088  208070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:44:12.794516  208070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:44:12.945619  208070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:44:12.960745  208070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:44:12.980037  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:44:12.990289  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:44:13.000153  208070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:44:13.000290  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:44:13.011343  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:44:13.021540  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:44:13.031323  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:44:13.041138  208070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:44:13.050121  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:44:13.059766  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:44:13.069313  208070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:44:13.079315  208070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:44:13.088091  208070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:44:13.096340  208070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:13.254505  208070 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:44:13.432840  208070 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:44:13.432964  208070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:44:13.437268  208070 start.go:564] Will wait 60s for crictl version
	I1123 08:44:13.437380  208070 ssh_runner.go:195] Run: which crictl
	I1123 08:44:13.447283  208070 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:44:13.496735  208070 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:44:13.496850  208070 ssh_runner.go:195] Run: containerd --version
	I1123 08:44:13.518598  208070 ssh_runner.go:195] Run: containerd --version
	I1123 08:44:13.547197  208070 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:44:13.550219  208070 cli_runner.go:164] Run: docker network inspect embed-certs-230843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:44:13.569816  208070 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:44:13.573664  208070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:44:13.583749  208070 kubeadm.go:884] updating cluster {Name:embed-certs-230843 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-230843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:44:13.583869  208070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:44:13.583940  208070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:44:13.617559  208070 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:44:13.617582  208070 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:44:13.617646  208070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:44:13.655825  208070 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:44:13.655846  208070 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:44:13.655853  208070 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 08:44:13.655954  208070 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-230843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-230843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:44:13.656015  208070 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:44:13.687279  208070 cni.go:84] Creating CNI manager for ""
	I1123 08:44:13.687302  208070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:44:13.687349  208070 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:44:13.687371  208070 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-230843 NodeName:embed-certs-230843 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:44:13.687487  208070 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-230843"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:44:13.687556  208070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:44:13.695772  208070 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:44:13.695844  208070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:44:13.703947  208070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1123 08:44:13.716967  208070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:44:13.729959  208070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1123 08:44:13.742910  208070 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:44:13.746652  208070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:44:13.756101  208070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:09.621354  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.305801584s)
	I1123 08:44:09.621382  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1123 08:44:09.621401  205527 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:44:09.621472  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:44:13.370517  205527 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.749015759s)
	I1123 08:44:13.370541  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 08:44:13.370558  205527 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:44:13.370611  205527 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:44:13.906030  205527 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 08:44:13.906062  205527 cache_images.go:125] Successfully loaded all cached images
	I1123 08:44:13.906067  205527 cache_images.go:94] duration metric: took 15.403497458s to LoadCachedImages
	I1123 08:44:13.906078  205527 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1123 08:44:13.906170  205527 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-596617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:44:13.906242  205527 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:44:13.946984  205527 cni.go:84] Creating CNI manager for ""
	I1123 08:44:13.947007  205527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:44:13.947020  205527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:44:13.947052  205527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-596617 NodeName:no-preload-596617 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:44:13.947161  205527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-596617"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:44:13.947226  205527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:44:13.967989  205527 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 08:44:13.968052  205527 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 08:44:13.994704  205527 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1123 08:44:13.994806  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 08:44:13.995443  205527 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1123 08:44:13.996813  205527 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1123 08:44:14.000863  205527 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 08:44:14.000908  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1123 08:44:13.910292  208070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:13.934693  208070 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843 for IP: 192.168.76.2
	I1123 08:44:13.934731  208070 certs.go:195] generating shared ca certs ...
	I1123 08:44:13.934748  208070 certs.go:227] acquiring lock for ca certs: {Name:mke0fc62f41acbef5eb3e84af3a3b8f9858bd1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:13.934926  208070 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key
	I1123 08:44:13.934990  208070 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key
	I1123 08:44:13.935003  208070 certs.go:257] generating profile certs ...
	I1123 08:44:13.935076  208070 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.key
	I1123 08:44:13.935097  208070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.crt with IP's: []
	I1123 08:44:14.136806  208070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.crt ...
	I1123 08:44:14.136886  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.crt: {Name:mk7188df987ff6201384ec199772dd4ba2c8d80e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.137128  208070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.key ...
	I1123 08:44:14.137160  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/client.key: {Name:mk256112ada63c93b50ab366f3ed122fe54cce84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.138908  208070 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key.1bb9a82d
	I1123 08:44:14.138983  208070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt.1bb9a82d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 08:44:14.352305  208070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt.1bb9a82d ...
	I1123 08:44:14.352386  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt.1bb9a82d: {Name:mke4d3bc434cd23a88cb8e2b92d52db45b473ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.353262  208070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key.1bb9a82d ...
	I1123 08:44:14.353319  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key.1bb9a82d: {Name:mka627bf1e131bf980036887cf099a66b966a4a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.353529  208070 certs.go:382] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt.1bb9a82d -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt
	I1123 08:44:14.353727  208070 certs.go:386] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key.1bb9a82d -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key
	I1123 08:44:14.353838  208070 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.key
	I1123 08:44:14.353863  208070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.crt with IP's: []
	I1123 08:44:14.949563  208070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.crt ...
	I1123 08:44:14.949638  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.crt: {Name:mkf579b75d4f60dc245cbcfdbb33a19b5632d08e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.950542  208070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.key ...
	I1123 08:44:14.950629  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.key: {Name:mk8b6c7d7f8460c1be9624ae5aac0c3d889446c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:14.950874  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem (1338 bytes)
	W1123 08:44:14.950947  208070 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151_empty.pem, impossibly tiny 0 bytes
	I1123 08:44:14.950972  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 08:44:14.951059  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:44:14.951122  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:44:14.951172  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem (1675 bytes)
	I1123 08:44:14.951262  208070 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:44:14.951932  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:44:14.972816  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:44:14.993691  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:44:15.018483  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:44:15.043505  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:44:15.067215  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:44:15.093679  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:44:15.125792  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/embed-certs-230843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:44:15.166087  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem --> /usr/share/ca-certificates/4151.pem (1338 bytes)
	I1123 08:44:15.211096  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /usr/share/ca-certificates/41512.pem (1708 bytes)
	I1123 08:44:15.257002  208070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:44:15.300930  208070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:44:15.322110  208070 ssh_runner.go:195] Run: openssl version
	I1123 08:44:15.332064  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4151.pem && ln -fs /usr/share/ca-certificates/4151.pem /etc/ssl/certs/4151.pem"
	I1123 08:44:15.342057  208070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4151.pem
	I1123 08:44:15.348824  208070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:02 /usr/share/ca-certificates/4151.pem
	I1123 08:44:15.348888  208070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4151.pem
	I1123 08:44:15.463707  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4151.pem /etc/ssl/certs/51391683.0"
	I1123 08:44:15.508947  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41512.pem && ln -fs /usr/share/ca-certificates/41512.pem /etc/ssl/certs/41512.pem"
	I1123 08:44:15.532500  208070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41512.pem
	I1123 08:44:15.551114  208070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:02 /usr/share/ca-certificates/41512.pem
	I1123 08:44:15.551184  208070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41512.pem
	I1123 08:44:15.676529  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41512.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:44:15.693134  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:44:15.709245  208070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:15.716248  208070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:15.716309  208070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:15.809343  208070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:44:15.821220  208070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:44:15.826868  208070 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:44:15.826919  208070 kubeadm.go:401] StartCluster: {Name:embed-certs-230843 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-230843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:15.826989  208070 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:44:15.827057  208070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:44:15.865090  208070 cri.go:89] found id: ""
	I1123 08:44:15.865202  208070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:44:15.878642  208070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:44:15.888405  208070 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:44:15.888466  208070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:44:15.900396  208070 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:44:15.900464  208070 kubeadm.go:158] found existing configuration files:
	
	I1123 08:44:15.900542  208070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:44:15.910826  208070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:44:15.910891  208070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:44:15.920058  208070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:44:15.929123  208070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:44:15.929185  208070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:44:15.938477  208070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:44:15.947791  208070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:44:15.947908  208070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:44:15.955219  208070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:44:15.968045  208070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:44:15.968186  208070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:44:15.983401  208070 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:44:16.045836  208070 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:44:16.045982  208070 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:44:16.083873  208070 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:44:16.083984  208070 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:44:16.084048  208070 kubeadm.go:319] OS: Linux
	I1123 08:44:16.084115  208070 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:44:16.084193  208070 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:44:16.084307  208070 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:44:16.084395  208070 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:44:16.084474  208070 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:44:16.084581  208070 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:44:16.084658  208070 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:44:16.084743  208070 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:44:16.084822  208070 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:44:16.217836  208070 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:44:16.218007  208070 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:44:16.218129  208070 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:44:16.225763  208070 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:44:15.145083  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 08:44:15.154050  205527 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 08:44:15.154143  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1123 08:44:15.227311  205527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:15.270427  205527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 08:44:15.286455  205527 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 08:44:15.286502  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1123 08:44:15.857789  205527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:44:15.870363  205527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 08:44:15.885356  205527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:44:15.900816  205527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1123 08:44:15.915802  205527 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:44:15.920804  205527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:44:15.932468  205527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:16.082663  205527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:16.106056  205527 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617 for IP: 192.168.85.2
	I1123 08:44:16.106077  205527 certs.go:195] generating shared ca certs ...
	I1123 08:44:16.106094  205527 certs.go:227] acquiring lock for ca certs: {Name:mke0fc62f41acbef5eb3e84af3a3b8f9858bd1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.106239  205527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key
	I1123 08:44:16.106285  205527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key
	I1123 08:44:16.106300  205527 certs.go:257] generating profile certs ...
	I1123 08:44:16.106391  205527 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.key
	I1123 08:44:16.106409  205527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt with IP's: []
	I1123 08:44:16.453703  205527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt ...
	I1123 08:44:16.453736  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt: {Name:mk8a5c1b998580c1ce82ec5015c51174aefa7b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.454596  205527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.key ...
	I1123 08:44:16.454615  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.key: {Name:mkdb069435a55c094282563843318c6e40257347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.454733  205527 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key.5887770e
	I1123 08:44:16.454753  205527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt.5887770e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:44:16.667725  205527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt.5887770e ...
	I1123 08:44:16.667757  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt.5887770e: {Name:mk402568c6ad009d91b37158736ab0794a8a3e8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.668587  205527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key.5887770e ...
	I1123 08:44:16.668606  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key.5887770e: {Name:mk77785d4cce0ec787eff9ba26527cdbbd934787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.668709  205527 certs.go:382] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt.5887770e -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt
	I1123 08:44:16.668792  205527 certs.go:386] copying /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key.5887770e -> /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key
	I1123 08:44:16.668856  205527 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key
	I1123 08:44:16.668879  205527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.crt with IP's: []
	I1123 08:44:16.804060  205527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.crt ...
	I1123 08:44:16.804094  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.crt: {Name:mka01e33d777b7c726a0d4f8a624a970b79b1d75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.804298  205527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key ...
	I1123 08:44:16.804313  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key: {Name:mk0f45357a9a0407ad0917e71f5321738dd0f7d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:16.804515  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem (1338 bytes)
	W1123 08:44:16.804565  205527 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151_empty.pem, impossibly tiny 0 bytes
	I1123 08:44:16.804579  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 08:44:16.804607  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:44:16.804636  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:44:16.804665  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem (1675 bytes)
	I1123 08:44:16.804716  205527 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:44:16.805271  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:44:16.828372  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:44:16.860169  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:44:16.882374  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:44:16.901162  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:44:16.925754  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:44:16.951923  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:44:16.972100  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:44:16.992716  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem --> /usr/share/ca-certificates/4151.pem (1338 bytes)
	I1123 08:44:17.014356  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /usr/share/ca-certificates/41512.pem (1708 bytes)
	I1123 08:44:17.034554  205527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:44:17.054514  205527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:44:17.069524  205527 ssh_runner.go:195] Run: openssl version
	I1123 08:44:17.076272  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:44:17.085550  205527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:17.089846  205527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:17.089915  205527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:17.139644  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:44:17.152256  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4151.pem && ln -fs /usr/share/ca-certificates/4151.pem /etc/ssl/certs/4151.pem"
	I1123 08:44:17.164353  205527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4151.pem
	I1123 08:44:17.169138  205527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:02 /usr/share/ca-certificates/4151.pem
	I1123 08:44:17.169211  205527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4151.pem
	I1123 08:44:17.211085  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4151.pem /etc/ssl/certs/51391683.0"
	I1123 08:44:17.231011  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41512.pem && ln -fs /usr/share/ca-certificates/41512.pem /etc/ssl/certs/41512.pem"
	I1123 08:44:17.240127  205527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41512.pem
	I1123 08:44:17.244326  205527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:02 /usr/share/ca-certificates/41512.pem
	I1123 08:44:17.244390  205527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41512.pem
	I1123 08:44:17.285962  205527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41512.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:44:17.295061  205527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:44:17.299351  205527 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:44:17.299404  205527 kubeadm.go:401] StartCluster: {Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:17.299478  205527 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:44:17.299541  205527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:44:17.327111  205527 cri.go:89] found id: ""
	I1123 08:44:17.327180  205527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:44:17.336985  205527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:44:17.345230  205527 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:44:17.345293  205527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:44:17.355914  205527 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:44:17.355945  205527 kubeadm.go:158] found existing configuration files:
	
	I1123 08:44:17.355995  205527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:44:17.364986  205527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:44:17.365055  205527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:44:17.373010  205527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:44:17.382693  205527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:44:17.382756  205527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:44:17.390667  205527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:44:17.399297  205527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:44:17.399358  205527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:44:17.407259  205527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:44:17.415994  205527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:44:17.416056  205527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:44:17.424188  205527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:44:17.470697  205527 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:44:17.471011  205527 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:44:17.497824  205527 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:44:17.497903  205527 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:44:17.497943  205527 kubeadm.go:319] OS: Linux
	I1123 08:44:17.497999  205527 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:44:17.498061  205527 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:44:17.498112  205527 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:44:17.498164  205527 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:44:17.498216  205527 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:44:17.498267  205527 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:44:17.498316  205527 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:44:17.498367  205527 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:44:17.498417  205527 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:44:17.598664  205527 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:44:17.598780  205527 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:44:17.598882  205527 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:44:17.621577  205527 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:44:16.232062  208070 out.go:252]   - Generating certificates and keys ...
	I1123 08:44:16.232252  208070 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:44:16.232375  208070 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:44:17.034972  208070 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:44:17.212443  208070 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:44:17.755959  208070 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:44:18.282388  208070 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:44:17.627775  205527 out.go:252]   - Generating certificates and keys ...
	I1123 08:44:17.627879  205527 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:44:17.627951  205527 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:44:18.901944  205527 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:44:19.242527  205527 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:44:19.382571  205527 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:44:19.745763  208070 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:44:19.754517  208070 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-230843 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:44:20.657812  208070 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:44:20.657949  208070 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-230843 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:44:20.966681  208070 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:44:21.513767  208070 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:44:21.853792  208070 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:44:21.853866  208070 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:44:22.469058  208070 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:44:22.850459  208070 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:44:19.791818  205527 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:44:21.281507  205527 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:44:21.282125  205527 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-596617] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:44:21.482697  205527 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:44:21.483325  205527 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-596617] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:44:21.700162  205527 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:44:23.765540  205527 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:44:24.388665  208070 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:44:24.728651  208070 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:44:25.535462  208070 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:44:25.536454  208070 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:44:25.541769  208070 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:44:24.885030  205527 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:44:24.885700  205527 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:44:24.981908  205527 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:44:25.116233  205527 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:44:26.261289  205527 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:44:27.569320  205527 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:44:28.481763  205527 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:44:28.481866  205527 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:44:28.481937  205527 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:44:25.545154  208070 out.go:252]   - Booting up control plane ...
	I1123 08:44:25.545264  208070 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:44:25.545349  208070 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:44:25.553578  208070 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:44:25.585938  208070 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:44:25.586047  208070 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:44:25.595406  208070 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:44:25.596179  208070 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:44:25.596755  208070 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:44:25.757731  208070 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:44:25.757851  208070 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:44:27.261764  208070 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501629862s
	I1123 08:44:27.262765  208070 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:44:27.262946  208070 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 08:44:27.263667  208070 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:44:27.263789  208070 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:44:28.485513  205527 out.go:252]   - Booting up control plane ...
	I1123 08:44:28.485630  205527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:44:28.485718  205527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:44:28.485792  205527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:44:28.518445  205527 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:44:28.518807  205527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:44:28.527558  205527 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:44:28.527817  205527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:44:28.528000  205527 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:44:28.757903  205527 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:44:28.758023  205527 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:44:30.249804  205527 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501494619s
	I1123 08:44:30.252283  205527 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:44:30.252497  205527 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 08:44:30.252591  205527 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:44:30.252996  205527 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:44:33.983655  208070 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.719559831s
	I1123 08:44:37.226412  205527 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.973174117s
	I1123 08:44:39.198708  205527 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.9452814s
	I1123 08:44:39.771010  208070 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 12.507763754s
	I1123 08:44:40.632942  208070 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 13.366512561s
	I1123 08:44:40.682010  208070 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:44:40.718781  208070 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:44:40.742339  208070 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:44:40.742572  208070 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-230843 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:44:40.776769  208070 kubeadm.go:319] [bootstrap-token] Using token: iuvxjy.jwvovxgrh4ynkuhf
	I1123 08:44:41.255275  205527 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.002632231s
	I1123 08:44:41.280712  205527 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:44:41.302399  205527 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:44:41.333475  205527 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:44:41.333687  205527 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-596617 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:44:41.361156  205527 kubeadm.go:319] [bootstrap-token] Using token: 97edkx.mk6s7acezl06y535
	I1123 08:44:40.779696  208070 out.go:252]   - Configuring RBAC rules ...
	I1123 08:44:40.779818  208070 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:44:40.785375  208070 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:44:40.796035  208070 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:44:40.804646  208070 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:44:40.812545  208070 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:44:40.817495  208070 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:44:41.038524  208070 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:44:41.474381  208070 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:44:42.042420  208070 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:44:42.042440  208070 kubeadm.go:319] 
	I1123 08:44:42.042517  208070 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:44:42.042521  208070 kubeadm.go:319] 
	I1123 08:44:42.042599  208070 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:44:42.042603  208070 kubeadm.go:319] 
	I1123 08:44:42.042628  208070 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:44:42.042696  208070 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:44:42.042748  208070 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:44:42.042752  208070 kubeadm.go:319] 
	I1123 08:44:42.042806  208070 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:44:42.042809  208070 kubeadm.go:319] 
	I1123 08:44:42.042857  208070 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:44:42.042861  208070 kubeadm.go:319] 
	I1123 08:44:42.042918  208070 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:44:42.042995  208070 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:44:42.043072  208070 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:44:42.043076  208070 kubeadm.go:319] 
	I1123 08:44:42.043163  208070 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:44:42.043240  208070 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:44:42.043246  208070 kubeadm.go:319] 
	I1123 08:44:42.043330  208070 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token iuvxjy.jwvovxgrh4ynkuhf \
	I1123 08:44:42.043433  208070 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 \
	I1123 08:44:42.043453  208070 kubeadm.go:319] 	--control-plane 
	I1123 08:44:42.043457  208070 kubeadm.go:319] 
	I1123 08:44:42.043542  208070 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:44:42.043546  208070 kubeadm.go:319] 
	I1123 08:44:42.043627  208070 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token iuvxjy.jwvovxgrh4ynkuhf \
	I1123 08:44:42.043730  208070 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 
	I1123 08:44:42.049147  208070 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 08:44:42.049553  208070 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:44:42.049687  208070 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:44:42.049700  208070 cni.go:84] Creating CNI manager for ""
	I1123 08:44:42.049708  208070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:44:42.052990  208070 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:44:41.364258  205527 out.go:252]   - Configuring RBAC rules ...
	I1123 08:44:41.364387  205527 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:44:41.377550  205527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:44:41.392237  205527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:44:41.398358  205527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:44:41.409745  205527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:44:41.414534  205527 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:44:41.665018  205527 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:44:42.149327  205527 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:44:42.666223  205527 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:44:42.667502  205527 kubeadm.go:319] 
	I1123 08:44:42.667573  205527 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:44:42.667578  205527 kubeadm.go:319] 
	I1123 08:44:42.667663  205527 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:44:42.667668  205527 kubeadm.go:319] 
	I1123 08:44:42.667693  205527 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:44:42.667752  205527 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:44:42.667802  205527 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:44:42.667806  205527 kubeadm.go:319] 
	I1123 08:44:42.667859  205527 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:44:42.667863  205527 kubeadm.go:319] 
	I1123 08:44:42.667910  205527 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:44:42.667914  205527 kubeadm.go:319] 
	I1123 08:44:42.667965  205527 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:44:42.668040  205527 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:44:42.668115  205527 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:44:42.668119  205527 kubeadm.go:319] 
	I1123 08:44:42.668203  205527 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:44:42.668279  205527 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:44:42.668283  205527 kubeadm.go:319] 
	I1123 08:44:42.668367  205527 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 97edkx.mk6s7acezl06y535 \
	I1123 08:44:42.668472  205527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 \
	I1123 08:44:42.668493  205527 kubeadm.go:319] 	--control-plane 
	I1123 08:44:42.668496  205527 kubeadm.go:319] 
	I1123 08:44:42.668582  205527 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:44:42.668586  205527 kubeadm.go:319] 
	I1123 08:44:42.671972  205527 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 97edkx.mk6s7acezl06y535 \
	I1123 08:44:42.672086  205527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35f48b47910e0f0424b1b0ace7d03cfc1e6ef5b162b679e98eef4f3a64a5a5 
	I1123 08:44:42.673580  205527 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 08:44:42.673804  205527 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:44:42.673908  205527 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:44:42.673925  205527 cni.go:84] Creating CNI manager for ""
	I1123 08:44:42.673932  205527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:44:42.677246  205527 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:44:42.055995  208070 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:44:42.065552  208070 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:44:42.065580  208070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:44:42.092663  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:44:42.624040  208070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:44:42.624174  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:42.624244  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-230843 minikube.k8s.io/updated_at=2025_11_23T08_44_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=embed-certs-230843 minikube.k8s.io/primary=true
	I1123 08:44:43.027510  208070 ops.go:34] apiserver oom_adj: -16
	I1123 08:44:43.027614  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:43.527981  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:42.680128  205527 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:44:42.685968  205527 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:44:42.685991  205527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:44:42.712176  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:44:43.084923  205527 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:44:43.085041  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:43.085103  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-596617 minikube.k8s.io/updated_at=2025_11_23T08_44_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=no-preload-596617 minikube.k8s.io/primary=true
	I1123 08:44:43.321444  205527 ops.go:34] apiserver oom_adj: -16
	I1123 08:44:43.321545  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:43.822205  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:44.321884  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:44.027776  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:44.528512  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:45.030313  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:45.527732  208070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:45.669462  208070 kubeadm.go:1114] duration metric: took 3.04533404s to wait for elevateKubeSystemPrivileges
	I1123 08:44:45.669502  208070 kubeadm.go:403] duration metric: took 29.842585167s to StartCluster
	I1123 08:44:45.669520  208070 settings.go:142] acquiring lock: {Name:mkfb77243b31dfe604b438e7da3f1bce2ba7b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:45.669588  208070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:44:45.670632  208070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:45.670836  208070 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:44:45.670982  208070 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:44:45.671192  208070 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:45.671259  208070 config.go:182] Loaded profile config "embed-certs-230843": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:44:45.671270  208070 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-230843"
	I1123 08:44:45.671287  208070 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-230843"
	I1123 08:44:45.671295  208070 addons.go:70] Setting default-storageclass=true in profile "embed-certs-230843"
	I1123 08:44:45.671307  208070 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-230843"
	I1123 08:44:45.671311  208070 host.go:66] Checking if "embed-certs-230843" exists ...
	I1123 08:44:45.671601  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:45.671756  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:45.705042  208070 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:45.722693  208070 addons.go:239] Setting addon default-storageclass=true in "embed-certs-230843"
	I1123 08:44:45.722743  208070 host.go:66] Checking if "embed-certs-230843" exists ...
	I1123 08:44:45.723200  208070 cli_runner.go:164] Run: docker container inspect embed-certs-230843 --format={{.State.Status}}
	I1123 08:44:45.732027  208070 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:44.822635  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:45.321717  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:45.822616  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:46.322263  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:46.822066  205527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:47.081276  205527 kubeadm.go:1114] duration metric: took 3.996277021s to wait for elevateKubeSystemPrivileges
	I1123 08:44:47.081308  205527 kubeadm.go:403] duration metric: took 29.781909942s to StartCluster
	I1123 08:44:47.081325  205527 settings.go:142] acquiring lock: {Name:mkfb77243b31dfe604b438e7da3f1bce2ba7b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:47.081450  205527 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:44:47.082908  205527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:47.083151  205527 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:44:47.083421  205527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:44:47.083650  205527 config.go:182] Loaded profile config "no-preload-596617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:44:47.083575  205527 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:47.083672  205527 addons.go:70] Setting storage-provisioner=true in profile "no-preload-596617"
	I1123 08:44:47.083680  205527 addons.go:70] Setting default-storageclass=true in profile "no-preload-596617"
	I1123 08:44:47.083692  205527 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-596617"
	I1123 08:44:47.083693  205527 addons.go:239] Setting addon storage-provisioner=true in "no-preload-596617"
	I1123 08:44:47.083720  205527 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:44:47.083983  205527 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:44:47.084208  205527 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:44:47.089154  205527 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:47.093108  205527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:47.126277  205527 addons.go:239] Setting addon default-storageclass=true in "no-preload-596617"
	I1123 08:44:47.126314  205527 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:44:47.126723  205527 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:44:47.140121  205527 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:45.732862  208070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:45.746787  208070 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:45.746809  208070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:45.746870  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:45.761834  208070 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:45.761862  208070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:45.761920  208070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-230843
	I1123 08:44:45.789622  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:45.802213  208070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/embed-certs-230843/id_rsa Username:docker}
	I1123 08:44:46.409851  208070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:46.457002  208070 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:44:46.457109  208070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:46.529172  208070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:48.581011  208070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.171129183s)
	I1123 08:44:48.581075  208070 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.123947846s)
	I1123 08:44:48.582164  208070 node_ready.go:35] waiting up to 6m0s for node "embed-certs-230843" to be "Ready" ...
	I1123 08:44:48.582493  208070 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.125465322s)
	I1123 08:44:48.582516  208070 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 08:44:48.583747  208070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.05450542s)
	I1123 08:44:48.679888  208070 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:44:48.683541  208070 addons.go:530] duration metric: took 3.012345078s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:44:47.144257  205527 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:47.144295  205527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:47.144377  205527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:44:47.172114  205527 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:47.172134  205527 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:47.172201  205527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:44:47.199200  205527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:44:47.223873  205527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:44:47.941786  205527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:48.030147  205527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:44:48.030298  205527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:48.055987  205527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:49.261973  205527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320107567s)
	I1123 08:44:49.262064  205527 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.231848863s)
	I1123 08:44:49.262084  205527 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 08:44:49.263528  205527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.207474353s)
	I1123 08:44:49.264287  205527 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.23169572s)
	I1123 08:44:49.266043  205527 node_ready.go:35] waiting up to 6m0s for node "no-preload-596617" to be "Ready" ...
	I1123 08:44:49.329816  205527 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:44:49.332649  205527 addons.go:530] duration metric: took 2.249069031s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:44:49.087300  208070 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-230843" context rescaled to 1 replicas
	W1123 08:44:50.586227  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:44:53.085172  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	I1123 08:44:49.767741  205527 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-596617" context rescaled to 1 replicas
	W1123 08:44:51.269363  205527 node_ready.go:57] node "no-preload-596617" has "Ready":"False" status (will retry)
	W1123 08:44:53.768931  205527 node_ready.go:57] node "no-preload-596617" has "Ready":"False" status (will retry)
	W1123 08:44:55.085253  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:44:57.085474  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:44:56.268932  205527 node_ready.go:57] node "no-preload-596617" has "Ready":"False" status (will retry)
	W1123 08:44:58.768622  205527 node_ready.go:57] node "no-preload-596617" has "Ready":"False" status (will retry)
	W1123 08:44:59.585786  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:45:02.085546  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:45:00.770654  205527 node_ready.go:57] node "no-preload-596617" has "Ready":"False" status (will retry)
	I1123 08:45:01.281251  205527 node_ready.go:49] node "no-preload-596617" is "Ready"
	I1123 08:45:01.281290  205527 node_ready.go:38] duration metric: took 12.015221271s for node "no-preload-596617" to be "Ready" ...
	I1123 08:45:01.281309  205527 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:45:01.281377  205527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:45:01.316855  205527 api_server.go:72] duration metric: took 14.23366653s to wait for apiserver process to appear ...
	I1123 08:45:01.316887  205527 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:01.316908  205527 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:45:01.327003  205527 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 08:45:01.328385  205527 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:01.328417  205527 api_server.go:131] duration metric: took 11.522392ms to wait for apiserver health ...
	I1123 08:45:01.328428  205527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:01.333329  205527 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:01.333376  205527 system_pods.go:61] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.333384  205527 system_pods.go:61] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:01.333390  205527 system_pods.go:61] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:01.333395  205527 system_pods.go:61] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:01.333449  205527 system_pods.go:61] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:01.333459  205527 system_pods.go:61] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:01.333464  205527 system_pods.go:61] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:01.333473  205527 system_pods.go:61] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.333487  205527 system_pods.go:74] duration metric: took 5.048629ms to wait for pod list to return data ...
	I1123 08:45:01.333496  205527 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:01.336410  205527 default_sa.go:45] found service account: "default"
	I1123 08:45:01.336444  205527 default_sa.go:55] duration metric: took 2.939943ms for default service account to be created ...
	I1123 08:45:01.336471  205527 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:01.342514  205527 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:01.342597  205527 system_pods.go:89] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.342624  205527 system_pods.go:89] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:01.342670  205527 system_pods.go:89] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:01.342694  205527 system_pods.go:89] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:01.342714  205527 system_pods.go:89] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:01.342737  205527 system_pods.go:89] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:01.342771  205527 system_pods.go:89] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:01.342797  205527 system_pods.go:89] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.342842  205527 retry.go:31] will retry after 302.46538ms: missing components: kube-dns
	I1123 08:45:01.650129  205527 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:01.650212  205527 system_pods.go:89] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.650238  205527 system_pods.go:89] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:01.650306  205527 system_pods.go:89] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:01.650335  205527 system_pods.go:89] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:01.650357  205527 system_pods.go:89] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:01.650379  205527 system_pods.go:89] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:01.650414  205527 system_pods.go:89] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:01.650438  205527 system_pods.go:89] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.650467  205527 retry.go:31] will retry after 375.532029ms: missing components: kube-dns
	I1123 08:45:02.048211  205527 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.048306  205527 system_pods.go:89] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.048331  205527 system_pods.go:89] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:02.048374  205527 system_pods.go:89] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:02.048401  205527 system_pods.go:89] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:02.048425  205527 system_pods.go:89] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:02.048452  205527 system_pods.go:89] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:02.048486  205527 system_pods.go:89] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:02.048522  205527 system_pods.go:89] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.048555  205527 retry.go:31] will retry after 443.454233ms: missing components: kube-dns
	I1123 08:45:02.496582  205527 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.496614  205527 system_pods.go:89] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.496620  205527 system_pods.go:89] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:02.496632  205527 system_pods.go:89] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:02.496637  205527 system_pods.go:89] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:02.496642  205527 system_pods.go:89] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:02.496646  205527 system_pods.go:89] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:02.496650  205527 system_pods.go:89] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:02.496656  205527 system_pods.go:89] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.496669  205527 retry.go:31] will retry after 464.392772ms: missing components: kube-dns
	I1123 08:45:02.965614  205527 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.965648  205527 system_pods.go:89] "coredns-66bc5c9577-spk2c" [7d69a45e-abdd-4480-8b79-7bb112b3eb7f] Running
	I1123 08:45:02.965656  205527 system_pods.go:89] "etcd-no-preload-596617" [02c8be1f-eaf7-42ea-95a7-62ab46ad07df] Running
	I1123 08:45:02.965661  205527 system_pods.go:89] "kindnet-68b4f" [1e512ae4-2f16-4e9d-898a-51c754a6d8d7] Running
	I1123 08:45:02.965665  205527 system_pods.go:89] "kube-apiserver-no-preload-596617" [976a1c84-6531-4143-b0de-3e22a2abe7eb] Running
	I1123 08:45:02.965670  205527 system_pods.go:89] "kube-controller-manager-no-preload-596617" [4c89da72-8e49-48f2-a2a3-cc52f957a0dd] Running
	I1123 08:45:02.965674  205527 system_pods.go:89] "kube-proxy-sq84q" [a70ddc44-854e-4253-aa99-0bd199e34d0e] Running
	I1123 08:45:02.965677  205527 system_pods.go:89] "kube-scheduler-no-preload-596617" [9bc661dd-7e92-4eba-b278-3a8a28862c53] Running
	I1123 08:45:02.965681  205527 system_pods.go:89] "storage-provisioner" [bbf4fd29-62c7-49d8-b210-930c2bd6c7b4] Running
	I1123 08:45:02.965689  205527 system_pods.go:126] duration metric: took 1.629210644s to wait for k8s-apps to be running ...
	I1123 08:45:02.965701  205527 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:02.965758  205527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:02.985453  205527 system_svc.go:56] duration metric: took 19.742114ms WaitForService to wait for kubelet
	I1123 08:45:02.985481  205527 kubeadm.go:587] duration metric: took 15.902298083s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:02.985499  205527 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:02.988643  205527 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:45:02.988678  205527 node_conditions.go:123] node cpu capacity is 2
	I1123 08:45:02.988692  205527 node_conditions.go:105] duration metric: took 3.187494ms to run NodePressure ...
	I1123 08:45:02.988705  205527 start.go:242] waiting for startup goroutines ...
	I1123 08:45:02.988712  205527 start.go:247] waiting for cluster config update ...
	I1123 08:45:02.988725  205527 start.go:256] writing updated cluster config ...
	I1123 08:45:02.989017  205527 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:02.993812  205527 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:02.997841  205527 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-spk2c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.004721  205527 pod_ready.go:94] pod "coredns-66bc5c9577-spk2c" is "Ready"
	I1123 08:45:03.004756  205527 pod_ready.go:86] duration metric: took 6.885986ms for pod "coredns-66bc5c9577-spk2c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.007413  205527 pod_ready.go:83] waiting for pod "etcd-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.012962  205527 pod_ready.go:94] pod "etcd-no-preload-596617" is "Ready"
	I1123 08:45:03.012996  205527 pod_ready.go:86] duration metric: took 5.544062ms for pod "etcd-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.015650  205527 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.020395  205527 pod_ready.go:94] pod "kube-apiserver-no-preload-596617" is "Ready"
	I1123 08:45:03.020426  205527 pod_ready.go:86] duration metric: took 4.745775ms for pod "kube-apiserver-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.023005  205527 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.397709  205527 pod_ready.go:94] pod "kube-controller-manager-no-preload-596617" is "Ready"
	I1123 08:45:03.397742  205527 pod_ready.go:86] duration metric: took 374.711235ms for pod "kube-controller-manager-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.598194  205527 pod_ready.go:83] waiting for pod "kube-proxy-sq84q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:03.998648  205527 pod_ready.go:94] pod "kube-proxy-sq84q" is "Ready"
	I1123 08:45:03.998683  205527 pod_ready.go:86] duration metric: took 400.460193ms for pod "kube-proxy-sq84q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:04.198303  205527 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:04.597794  205527 pod_ready.go:94] pod "kube-scheduler-no-preload-596617" is "Ready"
	I1123 08:45:04.597822  205527 pod_ready.go:86] duration metric: took 399.49259ms for pod "kube-scheduler-no-preload-596617" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:04.597837  205527 pod_ready.go:40] duration metric: took 1.603993881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:04.657432  205527 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:45:04.660624  205527 out.go:179] * Done! kubectl is now configured to use "no-preload-596617" cluster and "default" namespace by default
	W1123 08:45:04.586000  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:45:07.085884  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:45:09.585058  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:45:11.585648  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	W1123 08:45:13.586999  208070 node_ready.go:57] node "embed-certs-230843" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	72dc2e979e5ad       1611cd07b61d5       9 seconds ago       Running             busybox                   0                   2df5c11b9426b       busybox                                     default
	688a87c4a6cfd       138784d87c9c5       14 seconds ago      Running             coredns                   0                   46b94f7bae627       coredns-66bc5c9577-spk2c                    kube-system
	91c445761c112       66749159455b3       14 seconds ago      Running             storage-provisioner       0                   9315df4ee20b9       storage-provisioner                         kube-system
	4ff14e6367451       b1a8c6f707935       25 seconds ago      Running             kindnet-cni               0                   fd291fa8cf12b       kindnet-68b4f                               kube-system
	38a03d8690d80       05baa95f5142d       27 seconds ago      Running             kube-proxy                0                   0fe71124eeed0       kube-proxy-sq84q                            kube-system
	ae63305653ca8       a1894772a478e       45 seconds ago      Running             etcd                      0                   0fe3b525b0fb7       etcd-no-preload-596617                      kube-system
	2e1de07c6493d       7eb2c6ff0c5a7       45 seconds ago      Running             kube-controller-manager   0                   25f22a830344d       kube-controller-manager-no-preload-596617   kube-system
	3922d3ac1a3fa       b5f57ec6b9867       45 seconds ago      Running             kube-scheduler            0                   df50c7fad9f34       kube-scheduler-no-preload-596617            kube-system
	0106e17e619c2       43911e833d64d       45 seconds ago      Running             kube-apiserver            0                   4154848c60b4a       kube-apiserver-no-preload-596617            kube-system
	
	
	==> containerd <==
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.726304253Z" level=info msg="connecting to shim 91c445761c11225f240bc25605c50446bcaa23a89a3ee6c7f275c64941c44788" address="unix:///run/containerd/s/e9e73f3bd70fd8296c0530b6ceadaa40b81c451329c7b588036267e425341a63" protocol=ttrpc version=3
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.761886017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-spk2c,Uid:7d69a45e-abdd-4480-8b79-7bb112b3eb7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"46b94f7bae62779e99be4f918ef19eb1e3dccdb17404b7c5e774669b02193eb4\""
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.772237457Z" level=info msg="CreateContainer within sandbox \"46b94f7bae62779e99be4f918ef19eb1e3dccdb17404b7c5e774669b02193eb4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.788536099Z" level=info msg="Container 688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.800926941Z" level=info msg="CreateContainer within sandbox \"46b94f7bae62779e99be4f918ef19eb1e3dccdb17404b7c5e774669b02193eb4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002\""
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.801901674Z" level=info msg="StartContainer for \"688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002\""
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.802839950Z" level=info msg="connecting to shim 688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002" address="unix:///run/containerd/s/b4521a10350225922a802893d08b1e1e12eff30058b0dcfad667d18b30409d6a" protocol=ttrpc version=3
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.885754303Z" level=info msg="StartContainer for \"91c445761c11225f240bc25605c50446bcaa23a89a3ee6c7f275c64941c44788\" returns successfully"
	Nov 23 08:45:01 no-preload-596617 containerd[757]: time="2025-11-23T08:45:01.941730021Z" level=info msg="StartContainer for \"688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002\" returns successfully"
	Nov 23 08:45:05 no-preload-596617 containerd[757]: time="2025-11-23T08:45:05.207962028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9b93317d-72f3-440c-9896-cb6d0b98f255,Namespace:default,Attempt:0,}"
	Nov 23 08:45:05 no-preload-596617 containerd[757]: time="2025-11-23T08:45:05.266783434Z" level=info msg="connecting to shim 2df5c11b9426b4afdee47f6090f6d6cca0d3f5ee73778e073294963eba47e889" address="unix:///run/containerd/s/3b0c63e21a15375d0fdbf13d68951d1c2de67b19ee97324c432ce465fb88e12e" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:45:05 no-preload-596617 containerd[757]: time="2025-11-23T08:45:05.339894167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9b93317d-72f3-440c-9896-cb6d0b98f255,Namespace:default,Attempt:0,} returns sandbox id \"2df5c11b9426b4afdee47f6090f6d6cca0d3f5ee73778e073294963eba47e889\""
	Nov 23 08:45:05 no-preload-596617 containerd[757]: time="2025-11-23T08:45:05.342066264Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.300023316Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.301839790Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937187"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.304099215Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.308076085Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 1.965966761s"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.308290217Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.313601554Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.322804729Z" level=info msg="CreateContainer within sandbox \"2df5c11b9426b4afdee47f6090f6d6cca0d3f5ee73778e073294963eba47e889\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.337991545Z" level=info msg="Container 72dc2e979e5addf37d709e0deb3678494c8888240d86a15cd721576bfc1803bd: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.347996610Z" level=info msg="CreateContainer within sandbox \"2df5c11b9426b4afdee47f6090f6d6cca0d3f5ee73778e073294963eba47e889\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"72dc2e979e5addf37d709e0deb3678494c8888240d86a15cd721576bfc1803bd\""
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.349322461Z" level=info msg="StartContainer for \"72dc2e979e5addf37d709e0deb3678494c8888240d86a15cd721576bfc1803bd\""
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.350714954Z" level=info msg="connecting to shim 72dc2e979e5addf37d709e0deb3678494c8888240d86a15cd721576bfc1803bd" address="unix:///run/containerd/s/3b0c63e21a15375d0fdbf13d68951d1c2de67b19ee97324c432ce465fb88e12e" protocol=ttrpc version=3
	Nov 23 08:45:07 no-preload-596617 containerd[757]: time="2025-11-23T08:45:07.413577321Z" level=info msg="StartContainer for \"72dc2e979e5addf37d709e0deb3678494c8888240d86a15cd721576bfc1803bd\" returns successfully"
	
	
	==> coredns [688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51486 - 2615 "HINFO IN 1421145875051200784.6464575213870468913. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004837337s
	
	
	==> describe nodes <==
	Name:               no-preload-596617
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-596617
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=no-preload-596617
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:44:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-596617
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:45:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:45:13 +0000   Sun, 23 Nov 2025 08:44:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:45:13 +0000   Sun, 23 Nov 2025 08:44:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:45:13 +0000   Sun, 23 Nov 2025 08:44:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:45:13 +0000   Sun, 23 Nov 2025 08:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-596617
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                6cbb2352-56dd-44f5-96aa-57c90ae6b957
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-spk2c                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-596617                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-68b4f                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-596617             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-596617    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-sq84q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-596617             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node no-preload-596617 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node no-preload-596617 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x7 over 47s)  kubelet          Node no-preload-596617 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node no-preload-596617 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node no-preload-596617 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node no-preload-596617 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-596617 event: Registered Node no-preload-596617 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-596617 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	
	
	==> etcd [ae63305653ca8cbbd80c13dd0f9434bfc3feedc3bbff30a329f62b0559f2895a] <==
	{"level":"warn","ts":"2025-11-23T08:44:36.634259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.678987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.765616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.811318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.870363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.916784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:36.969796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.033670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.071150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.132866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.166166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.225907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.263366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.297268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.328395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.353157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.416623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.444153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.490692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.504255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.537148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.567760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.622561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.653238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:37.821517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56888","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:45:16 up  1:27,  0 user,  load average: 4.05, 3.89, 3.21
	Linux no-preload-596617 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4ff14e63674511be9833e17757d7ac8c83cf043c373fdfaeba96b335a278376f] <==
	I1123 08:44:50.864346       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:50.865396       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:44:50.865567       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:50.865578       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:50.865592       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:51.160945       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:51.161137       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:51.161228       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:51.162129       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:44:51.362112       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:44:51.362241       1 metrics.go:72] Registering metrics
	I1123 08:44:51.362346       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:01.164406       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:01.164486       1 main.go:301] handling current node
	I1123 08:45:11.161927       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:11.161968       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0106e17e619c21c4f70b18b18b785e598009939262000db201255c4c23134bb6] <==
	I1123 08:44:39.263837       1 policy_source.go:240] refreshing policies
	I1123 08:44:39.311438       1 controller.go:667] quota admission added evaluator for: namespaces
	E1123 08:44:39.323488       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1123 08:44:39.372476       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:39.372740       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:44:39.414601       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:39.415614       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:44:39.477796       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:44:39.687350       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:44:39.699877       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:44:39.700083       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:44:40.943845       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:44:41.002489       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:44:41.090414       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:44:41.127424       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:44:41.128745       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:44:41.145314       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:44:41.150733       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:44:42.092836       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:44:42.147907       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:44:42.171337       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:44:46.685247       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:46.695371       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:46.843823       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:44:47.048758       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2e1de07c6493d308c8cde1bd08ad1af4bde14c9a11d6c18de05914a462d0021b] <==
	I1123 08:44:46.228539       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:44:46.228651       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:44:46.228685       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:44:46.229565       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:44:46.229836       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:44:46.233098       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:44:46.242563       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:44:46.244450       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:44:46.244635       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:44:46.244765       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-596617"
	I1123 08:44:46.244847       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:44:46.259767       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:44:46.313593       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:46.315103       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:44:46.315422       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:44:46.315855       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:44:46.315951       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:46.316036       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:44:46.316214       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:44:46.412608       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-596617" podCIDRs=["10.244.0.0/24"]
	I1123 08:44:46.414393       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:46.430115       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:46.430144       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:44:46.430151       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:45:01.247537       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [38a03d8690d80f6c742953b846418123550408e6b4fc3bc3ed61b8578754af02] <==
	I1123 08:44:49.072187       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:44:49.179435       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:44:49.285555       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:44:49.285588       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:44:49.285667       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:44:49.385594       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:49.385658       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:44:49.391244       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:44:49.391562       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:44:49.391575       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:49.393241       1 config.go:200] "Starting service config controller"
	I1123 08:44:49.393255       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:44:49.403296       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:44:49.403373       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:44:49.403395       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:44:49.403399       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:44:49.404117       1 config.go:309] "Starting node config controller"
	I1123 08:44:49.404152       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:44:49.404159       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:44:49.494235       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:44:49.503687       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:44:49.503723       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3922d3ac1a3fa33fc277f69cf60fea88cb74510306d065fff3aedfcea5e11cd5] <==
	E1123 08:44:39.210367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:44:39.210412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:44:39.210470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:44:39.210512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:44:39.210555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:44:39.210598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:44:39.210710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:44:39.210768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:44:39.210808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:44:39.210847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:44:39.211010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:44:39.211206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 08:44:40.072950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:44:40.086703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:44:40.098489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 08:44:40.146816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:44:40.249718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:44:40.320045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:44:40.320358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:44:40.353714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:44:40.365398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:44:40.388090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:44:40.452755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:44:40.473656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1123 08:44:41.881850       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027220    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x55q\" (UniqueName: \"kubernetes.io/projected/a70ddc44-854e-4253-aa99-0bd199e34d0e-kube-api-access-5x55q\") pod \"kube-proxy-sq84q\" (UID: \"a70ddc44-854e-4253-aa99-0bd199e34d0e\") " pod="kube-system/kube-proxy-sq84q"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027281    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-cni-cfg\") pod \"kindnet-68b4f\" (UID: \"1e512ae4-2f16-4e9d-898a-51c754a6d8d7\") " pod="kube-system/kindnet-68b4f"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027301    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a70ddc44-854e-4253-aa99-0bd199e34d0e-lib-modules\") pod \"kube-proxy-sq84q\" (UID: \"a70ddc44-854e-4253-aa99-0bd199e34d0e\") " pod="kube-system/kube-proxy-sq84q"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027352    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-lib-modules\") pod \"kindnet-68b4f\" (UID: \"1e512ae4-2f16-4e9d-898a-51c754a6d8d7\") " pod="kube-system/kindnet-68b4f"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027369    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74k7\" (UniqueName: \"kubernetes.io/projected/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-kube-api-access-k74k7\") pod \"kindnet-68b4f\" (UID: \"1e512ae4-2f16-4e9d-898a-51c754a6d8d7\") " pod="kube-system/kindnet-68b4f"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027434    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-xtables-lock\") pod \"kindnet-68b4f\" (UID: \"1e512ae4-2f16-4e9d-898a-51c754a6d8d7\") " pod="kube-system/kindnet-68b4f"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.027452    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a70ddc44-854e-4253-aa99-0bd199e34d0e-kube-proxy\") pod \"kube-proxy-sq84q\" (UID: \"a70ddc44-854e-4253-aa99-0bd199e34d0e\") " pod="kube-system/kube-proxy-sq84q"
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403554    2122 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403587    2122 projected.go:196] Error preparing data for projected volume kube-api-access-5x55q for pod kube-system/kube-proxy-sq84q: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403689    2122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a70ddc44-854e-4253-aa99-0bd199e34d0e-kube-api-access-5x55q podName:a70ddc44-854e-4253-aa99-0bd199e34d0e nodeName:}" failed. No retries permitted until 2025-11-23 08:44:47.903664457 +0000 UTC m=+5.927624652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5x55q" (UniqueName: "kubernetes.io/projected/a70ddc44-854e-4253-aa99-0bd199e34d0e-kube-api-access-5x55q") pod "kube-proxy-sq84q" (UID: "a70ddc44-854e-4253-aa99-0bd199e34d0e") : configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403909    2122 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403921    2122 projected.go:196] Error preparing data for projected volume kube-api-access-k74k7 for pod kube-system/kindnet-68b4f: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: E1123 08:44:47.403963    2122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-kube-api-access-k74k7 podName:1e512ae4-2f16-4e9d-898a-51c754a6d8d7 nodeName:}" failed. No retries permitted until 2025-11-23 08:44:47.903951132 +0000 UTC m=+5.927911327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k74k7" (UniqueName: "kubernetes.io/projected/1e512ae4-2f16-4e9d-898a-51c754a6d8d7-kube-api-access-k74k7") pod "kindnet-68b4f" (UID: "1e512ae4-2f16-4e9d-898a-51c754a6d8d7") : configmap "kube-root-ca.crt" not found
	Nov 23 08:44:47 no-preload-596617 kubelet[2122]: I1123 08:44:47.953125    2122 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 08:44:51 no-preload-596617 kubelet[2122]: I1123 08:44:51.526335    2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-68b4f" podStartSLOduration=3.494759238 podStartE2EDuration="5.526315344s" podCreationTimestamp="2025-11-23 08:44:46 +0000 UTC" firstStartedPulling="2025-11-23 08:44:48.655410264 +0000 UTC m=+6.679370451" lastFinishedPulling="2025-11-23 08:44:50.68696637 +0000 UTC m=+8.710926557" observedRunningTime="2025-11-23 08:44:51.52594592 +0000 UTC m=+9.549906123" watchObservedRunningTime="2025-11-23 08:44:51.526315344 +0000 UTC m=+9.550275539"
	Nov 23 08:44:51 no-preload-596617 kubelet[2122]: I1123 08:44:51.526455    2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sq84q" podStartSLOduration=5.526449828 podStartE2EDuration="5.526449828s" podCreationTimestamp="2025-11-23 08:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:49.511706429 +0000 UTC m=+7.535666624" watchObservedRunningTime="2025-11-23 08:44:51.526449828 +0000 UTC m=+9.550410023"
	Nov 23 08:45:01 no-preload-596617 kubelet[2122]: I1123 08:45:01.196886    2122 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:45:01 no-preload-596617 kubelet[2122]: I1123 08:45:01.278906    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbf4fd29-62c7-49d8-b210-930c2bd6c7b4-tmp\") pod \"storage-provisioner\" (UID: \"bbf4fd29-62c7-49d8-b210-930c2bd6c7b4\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:01 no-preload-596617 kubelet[2122]: I1123 08:45:01.279150    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx4m6\" (UniqueName: \"kubernetes.io/projected/7d69a45e-abdd-4480-8b79-7bb112b3eb7f-kube-api-access-hx4m6\") pod \"coredns-66bc5c9577-spk2c\" (UID: \"7d69a45e-abdd-4480-8b79-7bb112b3eb7f\") " pod="kube-system/coredns-66bc5c9577-spk2c"
	Nov 23 08:45:01 no-preload-596617 kubelet[2122]: I1123 08:45:01.279267    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5kf6\" (UniqueName: \"kubernetes.io/projected/bbf4fd29-62c7-49d8-b210-930c2bd6c7b4-kube-api-access-x5kf6\") pod \"storage-provisioner\" (UID: \"bbf4fd29-62c7-49d8-b210-930c2bd6c7b4\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:01 no-preload-596617 kubelet[2122]: I1123 08:45:01.279378    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d69a45e-abdd-4480-8b79-7bb112b3eb7f-config-volume\") pod \"coredns-66bc5c9577-spk2c\" (UID: \"7d69a45e-abdd-4480-8b79-7bb112b3eb7f\") " pod="kube-system/coredns-66bc5c9577-spk2c"
	Nov 23 08:45:02 no-preload-596617 kubelet[2122]: I1123 08:45:02.611815    2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-spk2c" podStartSLOduration=15.611795453 podStartE2EDuration="15.611795453s" podCreationTimestamp="2025-11-23 08:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:02.586634267 +0000 UTC m=+20.610594471" watchObservedRunningTime="2025-11-23 08:45:02.611795453 +0000 UTC m=+20.635755640"
	Nov 23 08:45:04 no-preload-596617 kubelet[2122]: I1123 08:45:04.889166    2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.889135369 podStartE2EDuration="15.889135369s" podCreationTimestamp="2025-11-23 08:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:02.641611474 +0000 UTC m=+20.665571669" watchObservedRunningTime="2025-11-23 08:45:04.889135369 +0000 UTC m=+22.913095564"
	Nov 23 08:45:04 no-preload-596617 kubelet[2122]: I1123 08:45:04.902542    2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfk2p\" (UniqueName: \"kubernetes.io/projected/9b93317d-72f3-440c-9896-cb6d0b98f255-kube-api-access-qfk2p\") pod \"busybox\" (UID: \"9b93317d-72f3-440c-9896-cb6d0b98f255\") " pod="default/busybox"
	Nov 23 08:45:13 no-preload-596617 kubelet[2122]: E1123 08:45:13.039046    2122 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.85.2:41256->192.168.85.2:10010: read tcp 192.168.85.2:41256->192.168.85.2:10010: read: connection reset by peer
	
	
	==> storage-provisioner [91c445761c11225f240bc25605c50446bcaa23a89a3ee6c7f275c64941c44788] <==
	I1123 08:45:01.949065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:45:02.057538       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:45:02.057595       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:02.060413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:02.074470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:02.074917       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:02.075272       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-596617_bffee8a0-c5ce-4f43-b168-186013674e96!
	I1123 08:45:02.076532       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36ec1114-3ceb-4c05-ab16-32b7af61b9eb", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-596617_bffee8a0-c5ce-4f43-b168-186013674e96 became leader
	W1123 08:45:02.079000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:02.087962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:02.175694       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-596617_bffee8a0-c5ce-4f43-b168-186013674e96!
	W1123 08:45:04.090654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:04.095688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:06.099700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:06.104565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:08.107676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:08.115970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:10.127027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:10.132118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:12.136068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:12.143289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:14.147252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:14.152648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:16.155949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:16.163154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-596617 -n no-preload-596617
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-596617 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (12.85s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (15.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-230843 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [447e0831-d5fa-46df-8ee0-a7779b02f544] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [447e0831-d5fa-46df-8ee0-a7779b02f544] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00454061s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-230843 exec busybox -- /bin/sh -c "ulimit -n"
E1123 08:45:40.260072    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
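The assertion above is the actual failure: the busybox pod reports a soft open-file limit of 1024, while the test expects the 1048576 that minikube is expected to propagate into containers. A minimal reproduction sketch, assuming the embed-certs-230843 profile and the busybox pod from testdata/busybox.yaml are still running (the node-side check is illustrative and not taken from this report):

	# Inside the pod: this is the value the test asserts on (1024 here, 1048576 expected).
	kubectl --context embed-certs-230843 exec busybox -- /bin/sh -c "ulimit -n"

	# On the minikube node: if this already reports 1048576, the low value inside the pod
	# most likely comes from the container runtime's default RLIMIT_NOFILE (an assumption,
	# not something the logs below confirm).
	minikube -p embed-certs-230843 ssh "ulimit -n"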
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-230843
helpers_test.go:243: (dbg) docker inspect embed-certs-230843:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d",
	        "Created": "2025-11-23T08:44:06.268139127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208950,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:44:06.366094661Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d/hosts",
	        "LogPath": "/var/lib/docker/containers/2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d/2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d-json.log",
	        "Name": "/embed-certs-230843",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-230843:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-230843",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d",
	                "LowerDir": "/var/lib/docker/overlay2/03da6dc613c640352465e3da485494ade322d3cb48714cbb034c323f83515bc9-init/diff:/var/lib/docker/overlay2/88c30082a717909d357f7d81c88a05ce3487a40d372ee6dc57fb9f012e0502da/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03da6dc613c640352465e3da485494ade322d3cb48714cbb034c323f83515bc9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03da6dc613c640352465e3da485494ade322d3cb48714cbb034c323f83515bc9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03da6dc613c640352465e3da485494ade322d3cb48714cbb034c323f83515bc9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-230843",
	                "Source": "/var/lib/docker/volumes/embed-certs-230843/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-230843",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-230843",
	                "name.minikube.sigs.k8s.io": "embed-certs-230843",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb50b174d359b974409dc15124d96c77d2d9296c810302404b3a338d79265ab",
	            "SandboxKey": "/var/run/docker/netns/8cb50b174d35",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-230843": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:e4:32:dd:dd:fa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f76841edf3bfad9bd4d843371f12a3ed6e6a38d046c709f759987c025cf92ee5",
	                    "EndpointID": "d182c06ed275adabad64d5c95f5f59d6f899b4c4259a5f3e561782a929cbd861",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-230843",
	                        "2bfccd8525da"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
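One detail worth noting in the inspect output above: HostConfig.Ulimits is an empty list, so the kicbase container sets no explicit NOFILE limit of its own and inherits whatever the host Docker daemon provides. Assuming the low limit really does originate there (the host daemon configuration is not captured in this report), the standard Docker-level knobs would be a per-container flag or a daemon-wide default; this is a generic sketch, not the change minikube itself would apply:

	# Per-container override at creation time:
	docker run --ulimit nofile=1048576:1048576 ...

	# Daemon-wide default in /etc/docker/daemon.json:
	{
	  "default-ulimits": {
	    "nofile": { "Name": "nofile", "Hard": 1048576, "Soft": 1048576 }
	  }
	}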
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-230843 -n embed-certs-230843
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-230843 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-230843 logs -n 25: (2.018233759s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ start   │ -p cert-expiration-119748 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │ 23 Nov 25 08:40 UTC │
	│ ssh     │ force-systemd-env-760522 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ delete  │ -p force-systemd-env-760522                                                                                                                                                                                                                         │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ start   │ -p cert-options-106536 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ cert-options-106536 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ -p cert-options-106536 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p cert-options-106536                                                                                                                                                                                                                              │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-180638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ stop    │ -p old-k8s-version-180638 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-180638 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p cert-expiration-119748 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p cert-expiration-119748                                                                                                                                                                                                                           │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p no-preload-596617 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-180638 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ pause   │ -p old-k8s-version-180638 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ unpause │ -p old-k8s-version-180638 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p old-k8s-version-180638                                                                                                                                                                                                                           │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p old-k8s-version-180638                                                                                                                                                                                                                           │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p embed-certs-230843 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-230843       │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:45 UTC │
	│ addons  │ enable metrics-server -p no-preload-596617 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ stop    │ -p no-preload-596617 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ addons  │ enable dashboard -p no-preload-596617 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p no-preload-596617 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:45:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:45:31.081446  214489 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:31.081602  214489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:31.081613  214489 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:31.081619  214489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:31.081935  214489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:45:31.082310  214489 out.go:368] Setting JSON to false
	I1123 08:45:31.083414  214489 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5280,"bootTime":1763882251,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:45:31.083487  214489 start.go:143] virtualization:  
	I1123 08:45:31.087654  214489 out.go:179] * [no-preload-596617] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:45:31.090895  214489 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:45:31.090958  214489 notify.go:221] Checking for updates...
	I1123 08:45:31.098009  214489 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:45:31.100867  214489 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:45:31.103899  214489 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 08:45:31.106844  214489 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:45:31.109858  214489 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:45:31.113540  214489 config.go:182] Loaded profile config "no-preload-596617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:45:31.114187  214489 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:45:31.154610  214489 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:45:31.154736  214489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:31.252722  214489 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:45:31.242676492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:45:31.252826  214489 docker.go:319] overlay module found
	I1123 08:45:31.256042  214489 out.go:179] * Using the docker driver based on existing profile
	I1123 08:45:31.258921  214489 start.go:309] selected driver: docker
	I1123 08:45:31.258940  214489 start.go:927] validating driver "docker" against &{Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:31.259060  214489 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:45:31.259770  214489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:31.319215  214489 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:45:31.304179369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:45:31.319540  214489 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:31.319573  214489 cni.go:84] Creating CNI manager for ""
	I1123 08:45:31.319631  214489 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:45:31.319684  214489 start.go:353] cluster config:
	{Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:31.324896  214489 out.go:179] * Starting "no-preload-596617" primary control-plane node in "no-preload-596617" cluster
	I1123 08:45:31.327840  214489 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:45:31.330795  214489 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:45:31.333644  214489 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:45:31.333719  214489 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:45:31.333781  214489 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/config.json ...
	I1123 08:45:31.334083  214489 cache.go:107] acquiring lock: {Name:mk1e3231b750d1ca9ca2b3f99138ed3e9903a4d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334168  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 08:45:31.334177  214489 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.771µs
	I1123 08:45:31.334188  214489 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 08:45:31.334199  214489 cache.go:107] acquiring lock: {Name:mke9f25b0047751d431afe9e21a9064e6097bf6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334232  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 08:45:31.334237  214489 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 39.098µs
	I1123 08:45:31.334243  214489 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 08:45:31.334252  214489 cache.go:107] acquiring lock: {Name:mk809d72912357ebab68612e28a7a4618f1d6a79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334278  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 08:45:31.334283  214489 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.403µs
	I1123 08:45:31.334289  214489 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 08:45:31.334298  214489 cache.go:107] acquiring lock: {Name:mkd95a0f707394665885d4a227157bc117b65e9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334322  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 08:45:31.334327  214489 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.663µs
	I1123 08:45:31.334332  214489 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 08:45:31.334341  214489 cache.go:107] acquiring lock: {Name:mkbfec764792a8eb5c7ecc45cc698ede09834e24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334340  214489 cache.go:107] acquiring lock: {Name:mk5de408a9d67352662d4ae7d2550befae22c8a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334373  214489 cache.go:107] acquiring lock: {Name:mk50d60d7163508be3f18275abd2da533f83f001 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334402  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 08:45:31.334407  214489 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 34.79µs
	I1123 08:45:31.334411  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1123 08:45:31.334418  214489 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 08:45:31.334421  214489 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 88.625µs
	I1123 08:45:31.334429  214489 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 08:45:31.334366  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 08:45:31.334430  214489 cache.go:107] acquiring lock: {Name:mka80ab06b467377886e371915ae350b5918a590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334440  214489 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 99.767µs
	I1123 08:45:31.334446  214489 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 08:45:31.334457  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 08:45:31.334462  214489 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 32.755µs
	I1123 08:45:31.334467  214489 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 08:45:31.334473  214489 cache.go:87] Successfully saved all images to host disk.
	I1123 08:45:31.353746  214489 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:45:31.353768  214489 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:45:31.353790  214489 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:45:31.353825  214489 start.go:360] acquireMachinesLock for no-preload-596617: {Name:mkf8b1df8f307f4f80d1d148c731210823cc9721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.353897  214489 start.go:364] duration metric: took 57.223µs to acquireMachinesLock for "no-preload-596617"
	I1123 08:45:31.353919  214489 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:45:31.353924  214489 fix.go:54] fixHost starting: 
	I1123 08:45:31.354183  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:31.372423  214489 fix.go:112] recreateIfNeeded on no-preload-596617: state=Stopped err=<nil>
	W1123 08:45:31.372454  214489 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:45:31.375669  214489 out.go:252] * Restarting existing docker container for "no-preload-596617" ...
	I1123 08:45:31.375806  214489 cli_runner.go:164] Run: docker start no-preload-596617
	I1123 08:45:31.639329  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:31.662184  214489 kic.go:430] container "no-preload-596617" state is running.
	I1123 08:45:31.662572  214489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-596617
	I1123 08:45:31.686218  214489 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/config.json ...
	I1123 08:45:31.686507  214489 machine.go:94] provisionDockerMachine start ...
	I1123 08:45:31.686596  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:31.712297  214489 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:31.712736  214489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1123 08:45:31.712750  214489 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:45:31.713530  214489 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37098->127.0.0.1:33073: read: connection reset by peer
	I1123 08:45:34.869009  214489 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-596617
	
	I1123 08:45:34.869041  214489 ubuntu.go:182] provisioning hostname "no-preload-596617"
	I1123 08:45:34.869104  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:34.888099  214489 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:34.888410  214489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1123 08:45:34.888421  214489 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-596617 && echo "no-preload-596617" | sudo tee /etc/hostname
	I1123 08:45:35.053879  214489 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-596617
	
	I1123 08:45:35.054070  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:35.073209  214489 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:35.073619  214489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1123 08:45:35.073647  214489 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-596617' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-596617/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-596617' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:45:35.229890  214489 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:45:35.229960  214489 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-2339/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-2339/.minikube}
	I1123 08:45:35.230028  214489 ubuntu.go:190] setting up certificates
	I1123 08:45:35.230060  214489 provision.go:84] configureAuth start
	I1123 08:45:35.230147  214489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-596617
	I1123 08:45:35.248479  214489 provision.go:143] copyHostCerts
	I1123 08:45:35.248564  214489 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem, removing ...
	I1123 08:45:35.248582  214489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem
	I1123 08:45:35.248663  214489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem (1123 bytes)
	I1123 08:45:35.248766  214489 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem, removing ...
	I1123 08:45:35.248777  214489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem
	I1123 08:45:35.248808  214489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem (1675 bytes)
	I1123 08:45:35.248872  214489 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem, removing ...
	I1123 08:45:35.248882  214489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem
	I1123 08:45:35.248909  214489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem (1078 bytes)
	I1123 08:45:35.248961  214489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem org=jenkins.no-preload-596617 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-596617]
	I1123 08:45:35.850806  214489 provision.go:177] copyRemoteCerts
	I1123 08:45:35.850875  214489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:45:35.850917  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:35.869201  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:35.979046  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:45:35.999654  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:45:36.025274  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:45:36.045708  214489 provision.go:87] duration metric: took 815.610253ms to configureAuth
	I1123 08:45:36.045779  214489 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:45:36.046004  214489 config.go:182] Loaded profile config "no-preload-596617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:45:36.046035  214489 machine.go:97] duration metric: took 4.359518543s to provisionDockerMachine
	I1123 08:45:36.046044  214489 start.go:293] postStartSetup for "no-preload-596617" (driver="docker")
	I1123 08:45:36.046055  214489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:45:36.046105  214489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:45:36.046150  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:36.063725  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:36.169613  214489 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:45:36.172944  214489 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:45:36.172973  214489 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:45:36.172986  214489 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/addons for local assets ...
	I1123 08:45:36.173040  214489 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/files for local assets ...
	I1123 08:45:36.173117  214489 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem -> 41512.pem in /etc/ssl/certs
	I1123 08:45:36.173237  214489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:45:36.180871  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:45:36.200612  214489 start.go:296] duration metric: took 154.553376ms for postStartSetup
	I1123 08:45:36.200694  214489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:45:36.200741  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:36.217833  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:36.322374  214489 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:45:36.326930  214489 fix.go:56] duration metric: took 4.972999941s for fixHost
	I1123 08:45:36.326957  214489 start.go:83] releasing machines lock for "no-preload-596617", held for 4.973052224s
	I1123 08:45:36.327041  214489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-596617
	I1123 08:45:36.343318  214489 ssh_runner.go:195] Run: cat /version.json
	I1123 08:45:36.343366  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:36.343437  214489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:45:36.343499  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:36.361025  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:36.362515  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:36.466112  214489 ssh_runner.go:195] Run: systemctl --version
	I1123 08:45:36.595398  214489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:45:36.599765  214489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:45:36.599840  214489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:45:36.609111  214489 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:45:36.609177  214489 start.go:496] detecting cgroup driver to use...
	I1123 08:45:36.609212  214489 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:45:36.609275  214489 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:45:36.627766  214489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:45:36.641333  214489 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:45:36.641493  214489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:45:36.657217  214489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:45:36.669939  214489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:45:36.783552  214489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:45:36.909872  214489 docker.go:234] disabling docker service ...
	I1123 08:45:36.909988  214489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:45:36.925902  214489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:45:36.939655  214489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:45:37.081256  214489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:45:37.211762  214489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:45:37.224765  214489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:45:37.239027  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:45:37.248744  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:45:37.257550  214489 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:45:37.257655  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:45:37.266285  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:45:37.274831  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:45:37.287905  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:45:37.296301  214489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:45:37.304170  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:45:37.316409  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:45:37.325574  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:45:37.335262  214489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:45:37.342879  214489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:45:37.350627  214489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:37.480643  214489 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:45:37.652134  214489 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:45:37.652246  214489 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:45:37.656648  214489 start.go:564] Will wait 60s for crictl version
	I1123 08:45:37.656763  214489 ssh_runner.go:195] Run: which crictl
	I1123 08:45:37.660339  214489 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:45:37.690591  214489 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:45:37.690714  214489 ssh_runner.go:195] Run: containerd --version
	I1123 08:45:37.714789  214489 ssh_runner.go:195] Run: containerd --version
	I1123 08:45:37.737512  214489 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:45:37.740637  214489 cli_runner.go:164] Run: docker network inspect no-preload-596617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:37.756942  214489 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:45:37.760830  214489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:37.770599  214489 kubeadm.go:884] updating cluster {Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:45:37.770724  214489 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:45:37.770782  214489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:37.796463  214489 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:45:37.796491  214489 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:45:37.796499  214489 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1123 08:45:37.796597  214489 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-596617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:45:37.796662  214489 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:45:37.824748  214489 cni.go:84] Creating CNI manager for ""
	I1123 08:45:37.824777  214489 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:45:37.824797  214489 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:45:37.824832  214489 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-596617 NodeName:no-preload-596617 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:45:37.824956  214489 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-596617"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:45:37.825034  214489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:45:37.833591  214489 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:45:37.833658  214489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:45:37.841227  214489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 08:45:37.855027  214489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:45:37.867614  214489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1123 08:45:37.880143  214489 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:45:37.883674  214489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:37.893519  214489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:38.018478  214489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:38.038155  214489 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617 for IP: 192.168.85.2
	I1123 08:45:38.038228  214489 certs.go:195] generating shared ca certs ...
	I1123 08:45:38.038259  214489 certs.go:227] acquiring lock for ca certs: {Name:mke0fc62f41acbef5eb3e84af3a3b8f9858bd1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:38.038523  214489 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key
	I1123 08:45:38.038640  214489 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key
	I1123 08:45:38.038669  214489 certs.go:257] generating profile certs ...
	I1123 08:45:38.038809  214489 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.key
	I1123 08:45:38.038978  214489 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key.5887770e
	I1123 08:45:38.039116  214489 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key
	I1123 08:45:38.039311  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem (1338 bytes)
	W1123 08:45:38.039390  214489 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151_empty.pem, impossibly tiny 0 bytes
	I1123 08:45:38.039422  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 08:45:38.039499  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:45:38.039568  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:45:38.039634  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem (1675 bytes)
	I1123 08:45:38.039736  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:45:38.040620  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:45:38.064556  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:45:38.086079  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:45:38.107609  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:45:38.126755  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:45:38.150710  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:45:38.169348  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:45:38.203740  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:45:38.231914  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /usr/share/ca-certificates/41512.pem (1708 bytes)
	I1123 08:45:38.262708  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:45:38.286526  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem --> /usr/share/ca-certificates/4151.pem (1338 bytes)
	I1123 08:45:38.317892  214489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:45:38.334424  214489 ssh_runner.go:195] Run: openssl version
	I1123 08:45:38.342759  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:45:38.358198  214489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:38.362610  214489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:38.362675  214489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:38.406799  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:45:38.415497  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4151.pem && ln -fs /usr/share/ca-certificates/4151.pem /etc/ssl/certs/4151.pem"
	I1123 08:45:38.424328  214489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4151.pem
	I1123 08:45:38.428513  214489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:02 /usr/share/ca-certificates/4151.pem
	I1123 08:45:38.428580  214489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4151.pem
	I1123 08:45:38.469713  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4151.pem /etc/ssl/certs/51391683.0"
	I1123 08:45:38.480269  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41512.pem && ln -fs /usr/share/ca-certificates/41512.pem /etc/ssl/certs/41512.pem"
	I1123 08:45:38.489650  214489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41512.pem
	I1123 08:45:38.494474  214489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:02 /usr/share/ca-certificates/41512.pem
	I1123 08:45:38.494579  214489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41512.pem
	I1123 08:45:38.538710  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41512.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:45:38.546845  214489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:45:38.551545  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:45:38.598277  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:45:38.640067  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:45:38.683499  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:45:38.725672  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:45:38.781783  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
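Each control-plane certificate is then probed with openssl's -checkend flag, which exits non-zero when the certificate expires within the given window (86400 seconds, i.e. 24 hours). A rough loop over a couple of the files checked above, assuming openssl is available on the node:

  for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
             /var/lib/minikube/certs/etcd/server.crt; do
    if openssl x509 -noout -in "$crt" -checkend 86400; then
      echo "$crt: valid for at least another 24h"
    else
      echo "$crt: expires within 24h (or failed to parse)" >&2
    fi
  done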
	I1123 08:45:38.836583  214489 kubeadm.go:401] StartCluster: {Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
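The StartCluster dump above is minikube's in-memory cluster config for the profile; the same settings are persisted as JSON in the profile directory seen earlier in this log, so it should be inspectable with something like the following (config.json is the usual minikube file name, an assumption here, and jq is only used for readability):

  jq '.KubernetesConfig' \
    /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/config.json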
	I1123 08:45:38.836731  214489 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:45:38.836832  214489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:45:38.921526  214489 cri.go:89] found id: "688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002"
	I1123 08:45:38.921602  214489 cri.go:89] found id: "91c445761c11225f240bc25605c50446bcaa23a89a3ee6c7f275c64941c44788"
	I1123 08:45:38.921622  214489 cri.go:89] found id: "4ff14e63674511be9833e17757d7ac8c83cf043c373fdfaeba96b335a278376f"
	I1123 08:45:38.921645  214489 cri.go:89] found id: "38a03d8690d80f6c742953b846418123550408e6b4fc3bc3ed61b8578754af02"
	I1123 08:45:38.921689  214489 cri.go:89] found id: "ae63305653ca8cbbd80c13dd0f9434bfc3feedc3bbff30a329f62b0559f2895a"
	I1123 08:45:38.921713  214489 cri.go:89] found id: "2e1de07c6493d308c8cde1bd08ad1af4bde14c9a11d6c18de05914a462d0021b"
	I1123 08:45:38.921733  214489 cri.go:89] found id: "3922d3ac1a3fa33fc277f69cf60fea88cb74510306d065fff3aedfcea5e11cd5"
	I1123 08:45:38.921767  214489 cri.go:89] found id: "0106e17e619c21c4f70b18b18b785e598009939262000db201255c4c23134bb6"
	I1123 08:45:38.921790  214489 cri.go:89] found id: ""
	I1123 08:45:38.921874  214489 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 08:45:38.959607  214489 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-596617_95d6e38b29f32119fb34b2fd5647f69d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.
cri.sandbox-name":"etcd-no-preload-596617","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"95d6e38b29f32119fb34b2fd5647f69d"},"owner":"root"},{"ociVersion":"1.2.1","id":"6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kub
e-system_kube-apiserver-no-preload-596617_7fd9e0d712de079a2b15f8f5d509bcc6","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-596617","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7fd9e0d712de079a2b15f8f5d509bcc6"},"owner":"root"},{"ociVersion":"1.2.1","id":"e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c","pid":895,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c/rootfs","created":"2025-11-23T08:45:38.873599607Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.ku
bernetes.cri.sandbox-id":"e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-596617_1a4385a43578375499ace3c12875268a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-596617","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1a4385a43578375499ace3c12875268a"},"owner":"root"},{"ociVersion":"1.2.1","id":"efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a","pid":908,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a/rootfs","created":"2025-11-23T08:45:38.866757323Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pau
se:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-596617_0cce2b6da6fbc15cd83724928d768fdc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-596617","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0cce2b6da6fbc15cd83724928d768fdc"},"owner":"root"}]
	I1123 08:45:38.959861  214489 cri.go:126] list returned 4 containers
	I1123 08:45:38.959900  214489 cri.go:129] container: {ID:430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384 Status:stopped}
	I1123 08:45:38.959935  214489 cri.go:131] skipping 430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384 - not in ps
	I1123 08:45:38.959978  214489 cri.go:129] container: {ID:6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14 Status:stopped}
	I1123 08:45:38.960003  214489 cri.go:131] skipping 6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14 - not in ps
	I1123 08:45:38.960043  214489 cri.go:129] container: {ID:e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c Status:created}
	I1123 08:45:38.960066  214489 cri.go:131] skipping e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c - not in ps
	I1123 08:45:38.960087  214489 cri.go:129] container: {ID:efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a Status:created}
	I1123 08:45:38.960130  214489 cri.go:131] skipping efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a - not in ps
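Two views of the runtime are being cross-referenced here: crictl returns the kube-system container IDs known to the CRI, while runc lists every OCI task under containerd's k8s.io root; tasks that runc reports but crictl does not (sandboxes and just-created tasks) are skipped. Roughly the same comparison from a shell, assuming crictl, runc and jq are installed on the node:

  # container IDs the CRI knows about in kube-system
  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system > /tmp/cri-ids
  # every runc task under containerd's k8s.io root, compared against that list
  sudo runc --root /run/containerd/runc/k8s.io list -f json | jq -r '.[].id' |
    while read -r id; do
      grep -q "$id" /tmp/cri-ids && echo "keep $id" || echo "skip $id - not in crictl ps"
    done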
	I1123 08:45:38.960223  214489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:45:38.975901  214489 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:45:38.975978  214489 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:45:38.976074  214489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:45:38.991808  214489 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:45:38.992849  214489 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-596617" does not appear in /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:45:38.993539  214489 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-2339/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-596617" cluster setting kubeconfig missing "no-preload-596617" context setting]
	I1123 08:45:38.994458  214489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:38.996453  214489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:45:39.008852  214489 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 08:45:39.008949  214489 kubeadm.go:602] duration metric: took 32.951112ms to restartPrimaryControlPlane
	I1123 08:45:39.008972  214489 kubeadm.go:403] duration metric: took 172.411976ms to StartCluster
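The repair above happens because the shared kubeconfig has neither a cluster nor a context entry for no-preload-596617; minikube rewrites the file directly. Done by hand, the equivalent would be roughly these kubectl config calls (server address taken from the node entry above; CA path and user name are illustrative):

  kubectl config set-cluster no-preload-596617 \
    --server=https://192.168.85.2:8443 \
    --certificate-authority="$HOME/.minikube/ca.crt"
  kubectl config set-context no-preload-596617 \
    --cluster=no-preload-596617 --user=no-preload-596617
  kubectl config use-context no-preload-596617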
	I1123 08:45:39.009029  214489 settings.go:142] acquiring lock: {Name:mkfb77243b31dfe604b438e7da3f1bce2ba7b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:39.009138  214489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:45:39.010881  214489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:39.011561  214489 config.go:182] Loaded profile config "no-preload-596617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:45:39.011686  214489 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:45:39.011758  214489 addons.go:70] Setting storage-provisioner=true in profile "no-preload-596617"
	I1123 08:45:39.011779  214489 addons.go:239] Setting addon storage-provisioner=true in "no-preload-596617"
	W1123 08:45:39.011786  214489 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:45:39.011809  214489 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:45:39.012299  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:39.011649  214489 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:45:39.013730  214489 addons.go:70] Setting dashboard=true in profile "no-preload-596617"
	I1123 08:45:39.013752  214489 addons.go:239] Setting addon dashboard=true in "no-preload-596617"
	W1123 08:45:39.013760  214489 addons.go:248] addon dashboard should already be in state true
	I1123 08:45:39.013797  214489 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:45:39.014264  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:39.014546  214489 addons.go:70] Setting metrics-server=true in profile "no-preload-596617"
	I1123 08:45:39.014565  214489 addons.go:239] Setting addon metrics-server=true in "no-preload-596617"
	W1123 08:45:39.014572  214489 addons.go:248] addon metrics-server should already be in state true
	I1123 08:45:39.014613  214489 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:45:39.015041  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:39.016694  214489 addons.go:70] Setting default-storageclass=true in profile "no-preload-596617"
	I1123 08:45:39.016718  214489 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-596617"
	I1123 08:45:39.017015  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:39.019827  214489 out.go:179] * Verifying Kubernetes components...
	I1123 08:45:39.023396  214489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:39.061751  214489 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:45:39.066392  214489 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:39.066426  214489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:45:39.066496  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:39.072884  214489 addons.go:239] Setting addon default-storageclass=true in "no-preload-596617"
	W1123 08:45:39.072907  214489 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:45:39.074937  214489 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:45:39.077261  214489 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:45:39.077773  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:39.082511  214489 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 08:45:39.085511  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:45:39.085535  214489 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:45:39.085610  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:39.109958  214489 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 08:45:39.116982  214489 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 08:45:39.117021  214489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 08:45:39.117094  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:39.136342  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:39.146437  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:39.159911  214489 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:39.159933  214489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:45:39.160002  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:39.172966  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:39.196152  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:39.359016  214489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:39.402597  214489 node_ready.go:35] waiting up to 6m0s for node "no-preload-596617" to be "Ready" ...
	I1123 08:45:39.452427  214489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 08:45:39.452489  214489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 08:45:39.501356  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:45:39.501458  214489 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:45:39.557125  214489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:39.574251  214489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 08:45:39.574320  214489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 08:45:39.657370  214489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:39.716127  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:45:39.716203  214489 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:45:39.727640  214489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:45:39.727716  214489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 08:45:39.918313  214489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:45:39.935742  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:45:39.935764  214489 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:45:40.054502  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:45:40.054522  214489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:45:40.256707  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:45:40.256729  214489 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:45:40.410734  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:45:40.410756  214489 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:45:40.658144  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:45:40.658166  214489 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:45:40.690954  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:45:40.691005  214489 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:45:40.737077  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:45:40.737105  214489 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:45:40.778424  214489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
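Each addon is staged as plain YAML under /etc/kubernetes/addons on the node and then applied in a single invocation of the bundled kubectl against the node-local kubeconfig, essentially:

  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.34.1/kubectl apply \
    -f /etc/kubernetes/addons/storage-provisioner.yaml \
    -f /etc/kubernetes/addons/dashboard-ns.yaml \
    -f /etc/kubernetes/addons/metrics-apiservice.yaml   # remaining manifests omitted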
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	8c3a9b57aa3fa       1611cd07b61d5       7 seconds ago        Running             busybox                   0                   98fdfe5685049       busybox                                      default
	bc7e685270adf       138784d87c9c5       13 seconds ago       Running             coredns                   0                   fc302873a148b       coredns-66bc5c9577-64zf9                     kube-system
	1643cef73498b       ba04bb24b9575       13 seconds ago       Running             storage-provisioner       0                   3a58ba9e071e8       storage-provisioner                          kube-system
	5f6d1056bb18b       05baa95f5142d       55 seconds ago       Running             kube-proxy                0                   fc3f38c04d9c9       kube-proxy-7q2pg                             kube-system
	f0d0a156acdac       b1a8c6f707935       55 seconds ago       Running             kindnet-cni               0                   85d7f63e99ad9       kindnet-cvhwv                                kube-system
	7815de9f3375d       a1894772a478e       About a minute ago   Running             etcd                      0                   0c8a6c43c5d6d       etcd-embed-certs-230843                      kube-system
	37257eb77812e       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   01015a6393b7f       kube-apiserver-embed-certs-230843            kube-system
	9080da21cc845       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   047e03a9d72c8       kube-scheduler-embed-certs-230843            kube-system
	d7d27bf5ec2ff       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   8d94c6589312f       kube-controller-manager-embed-certs-230843   kube-system
	
	
	==> containerd <==
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.273344624Z" level=info msg="connecting to shim 1643cef73498bee207cedab516c684b9aa70205b15932f5d3f7a3cf78cc833b5" address="unix:///run/containerd/s/3bab59b63c6b030cf9dd07f2c509f040e6b5b34c9fe1d9fc0c0e3b394e5055d2" protocol=ttrpc version=3
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.325815126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-64zf9,Uid:b07768e4-8c90-4092-a257-3ec33d787231,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc302873a148b399d70c051330e0d1a33a5b343203c5607d9127d5f919c9df85\""
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.333139402Z" level=info msg="CreateContainer within sandbox \"fc302873a148b399d70c051330e0d1a33a5b343203c5607d9127d5f919c9df85\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.344481098Z" level=info msg="Container bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.362985648Z" level=info msg="CreateContainer within sandbox \"fc302873a148b399d70c051330e0d1a33a5b343203c5607d9127d5f919c9df85\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3\""
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.368970740Z" level=info msg="StartContainer for \"bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3\""
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.370308783Z" level=info msg="connecting to shim bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3" address="unix:///run/containerd/s/4608ece2871b1abe57a3f54e57d31f97a2cf4975fdc53616d2e4041a0a884de5" protocol=ttrpc version=3
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.381683809Z" level=info msg="StartContainer for \"1643cef73498bee207cedab516c684b9aa70205b15932f5d3f7a3cf78cc833b5\" returns successfully"
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.455810770Z" level=info msg="StartContainer for \"bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3\" returns successfully"
	Nov 23 08:45:31 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:31.800427357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:447e0831-d5fa-46df-8ee0-a7779b02f544,Namespace:default,Attempt:0,}"
	Nov 23 08:45:31 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:31.862847316Z" level=info msg="connecting to shim 98fdfe56850498bfb6e52c2e7f6543f8d80d40adc465e964996dba87fcd3831c" address="unix:///run/containerd/s/487c356479455e6d25bb08223165e1b4d328530be47437424df978a630e4659d" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:45:31 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:31.989790760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:447e0831-d5fa-46df-8ee0-a7779b02f544,Namespace:default,Attempt:0,} returns sandbox id \"98fdfe56850498bfb6e52c2e7f6543f8d80d40adc465e964996dba87fcd3831c\""
	Nov 23 08:45:32 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:32.001508001Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.200029591Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.201881939Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.204408788Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.208337116Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.208863851Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.204065668s"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.208907453Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.215247963Z" level=info msg="CreateContainer within sandbox \"98fdfe56850498bfb6e52c2e7f6543f8d80d40adc465e964996dba87fcd3831c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.228278484Z" level=info msg="Container 8c3a9b57aa3fa8a526c2fff7ea5fd3f15aebed1e992b56869329db637ea5318b: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.240217052Z" level=info msg="CreateContainer within sandbox \"98fdfe56850498bfb6e52c2e7f6543f8d80d40adc465e964996dba87fcd3831c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8c3a9b57aa3fa8a526c2fff7ea5fd3f15aebed1e992b56869329db637ea5318b\""
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.242490303Z" level=info msg="StartContainer for \"8c3a9b57aa3fa8a526c2fff7ea5fd3f15aebed1e992b56869329db637ea5318b\""
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.243461507Z" level=info msg="connecting to shim 8c3a9b57aa3fa8a526c2fff7ea5fd3f15aebed1e992b56869329db637ea5318b" address="unix:///run/containerd/s/487c356479455e6d25bb08223165e1b4d328530be47437424df978a630e4659d" protocol=ttrpc version=3
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.296359280Z" level=info msg="StartContainer for \"8c3a9b57aa3fa8a526c2fff7ea5fd3f15aebed1e992b56869329db637ea5318b\" returns successfully"
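The containerd excerpt covers the whole busybox lifecycle on embed-certs-230843: sandbox creation, image pull, container create, and start. The same sequence can be driven manually with crictl; the two JSON config files below are hypothetical stand-ins for what the kubelet normally supplies:

  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
  POD=$(sudo crictl runp pod-config.json)                       # sandbox config (hypothetical)
  CTR=$(sudo crictl create "$POD" container-config.json pod-config.json)
  sudo crictl start "$CTR"
  sudo crictl logs "$CTR"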
	
	
	==> coredns [bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50180 - 21689 "HINFO IN 8371299108907945111.2069241919711160450. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02338457s
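CoreDNS is serving on :53 and answering its own HINFO self-check. A quick in-cluster confirmation that service names resolve is a throwaway pod (the image tag is an assumption, not taken from this run):

  kubectl run dns-check --rm -it --restart=Never \
    --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local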
	
	
	==> describe nodes <==
	Name:               embed-certs-230843
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-230843
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=embed-certs-230843
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:44:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-230843
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:45:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:45:27 +0000   Sun, 23 Nov 2025 08:44:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:45:27 +0000   Sun, 23 Nov 2025 08:44:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:45:27 +0000   Sun, 23 Nov 2025 08:44:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:45:27 +0000   Sun, 23 Nov 2025 08:45:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-230843
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                ea90e4f5-4c64-4793-a3b1-1cc79e44f0f7
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-64zf9                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-230843                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-cvhwv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-230843             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-230843    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-7q2pg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-230843             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 76s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 76s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  75s (x8 over 76s)  kubelet          Node embed-certs-230843 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     75s (x7 over 76s)  kubelet          Node embed-certs-230843 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    75s (x8 over 76s)  kubelet          Node embed-certs-230843 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-230843 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-230843 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-230843 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node embed-certs-230843 event: Registered Node embed-certs-230843 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-230843 status is now: NodeReady
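The node report shows embed-certs-230843 turning Ready about 15 seconds before the dump was taken. The same condition can be read programmatically, for example:

  kubectl get node embed-certs-230843 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'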
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	
	
	==> etcd [7815de9f3375dcddbbc8379dd5a00b505f4db9ba3ca59f52fa41a0f7bcce5fe9] <==
	{"level":"warn","ts":"2025-11-23T08:44:34.103722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.149958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.167825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.206703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.257722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.293336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.326678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.364373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.429199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.452495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.487005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.505791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.537468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.573360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.601546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.631938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.734116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.758815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.798901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.868800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.929780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.959844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.991712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:35.030515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:35.157910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55952","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:45:42 up  1:28,  0 user,  load average: 3.42, 3.75, 3.18
	Linux embed-certs-230843 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f0d0a156acdacc9d4d9949e49a4372f92290702298c3fdcd060a234f9be14c60] <==
	I1123 08:44:47.477042       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:47.477541       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:44:47.477752       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:47.477799       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:47.477813       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:47.772640       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:47.772690       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:47.772704       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:47.772869       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:45:17.772417       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:45:17.772777       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:45:17.772891       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:45:17.772959       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1123 08:45:18.872863       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:45:18.873059       1 metrics.go:72] Registering metrics
	I1123 08:45:18.873239       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:27.690746       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:45:27.690800       1 main.go:301] handling current node
	I1123 08:45:37.690718       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:45:37.690777       1 main.go:301] handling current node
	
	
	==> kube-apiserver [37257eb77812edf6e29e98549c15886ab92f60d6da37840c23393a2fcd8bce7a] <==
	I1123 08:44:37.140947       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:44:37.144487       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:44:37.210131       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 08:44:37.210522       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:37.211372       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:44:37.217477       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:44:37.217700       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:44:37.217777       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:44:37.518668       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:44:37.548621       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:44:37.548655       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:44:39.378523       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:44:39.467323       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:44:39.642363       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:44:39.656761       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 08:44:39.658280       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:44:39.670732       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:44:39.818019       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:44:41.450504       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:44:41.472958       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:44:41.487231       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:44:45.663110       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:44:45.774054       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:45.816167       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:45.993275       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d7d27bf5ec2ffab09e4a0156bf3fb41c6d2e59dbf4c6daa9f64d25e8c5f183dc] <==
	I1123 08:44:44.809478       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:44:44.816655       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-230843" podCIDRs=["10.244.0.0/24"]
	I1123 08:44:44.809464       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:44:44.817586       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:44.818948       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:44:44.819595       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:44:44.821193       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-230843"
	I1123 08:44:44.821490       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:44:44.821375       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:44.824510       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:44:44.830822       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:44:44.831019       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:44:44.837164       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:44:44.841739       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:44:44.856183       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:44:44.859250       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:44:44.859492       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:44:44.859736       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:44:44.862882       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:44.863093       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:44:44.870163       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:44:44.886406       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:44.886661       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:44:44.886741       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:45:29.828111       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5f6d1056bb18bb56d51f45848340b4ebd99d67eaa5c4ffd79c9af9b2446b8dbe] <==
	I1123 08:44:47.652096       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:44:47.753587       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:44:47.956621       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:44:47.956661       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 08:44:47.956771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:44:48.047587       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:48.047653       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:44:48.061332       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:44:48.061681       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:44:48.061699       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:48.072963       1 config.go:200] "Starting service config controller"
	I1123 08:44:48.072984       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:44:48.073015       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:44:48.073020       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:44:48.073033       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:44:48.073037       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:44:48.094027       1 config.go:309] "Starting node config controller"
	I1123 08:44:48.094047       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:44:48.094055       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:44:48.174698       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:44:48.174741       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:44:48.174783       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9080da21cc8455a9549f1206f0f811a839f7627a0c2fc95eaa26193364f5ab2a] <==
	I1123 08:44:36.150358       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:44:40.586713       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:44:40.595078       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:40.606332       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:44:40.606627       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:40.606891       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:40.606581       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:44:40.607255       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:44:40.606644       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:44:40.614831       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:44:40.606658       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:44:40.707170       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:40.710120       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 08:44:40.715601       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:44:42 embed-certs-230843 kubelet[1471]: I1123 08:44:42.758167    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-230843" podStartSLOduration=1.758149656 podStartE2EDuration="1.758149656s" podCreationTimestamp="2025-11-23 08:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:42.738666636 +0000 UTC m=+1.380350377" watchObservedRunningTime="2025-11-23 08:44:42.758149656 +0000 UTC m=+1.399833429"
	Nov 23 08:44:42 embed-certs-230843 kubelet[1471]: I1123 08:44:42.772858    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-230843" podStartSLOduration=1.772746656 podStartE2EDuration="1.772746656s" podCreationTimestamp="2025-11-23 08:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:42.75805221 +0000 UTC m=+1.399735959" watchObservedRunningTime="2025-11-23 08:44:42.772746656 +0000 UTC m=+1.414430405"
	Nov 23 08:44:42 embed-certs-230843 kubelet[1471]: I1123 08:44:42.773314    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-230843" podStartSLOduration=1.773303775 podStartE2EDuration="1.773303775s" podCreationTimestamp="2025-11-23 08:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:42.773065348 +0000 UTC m=+1.414749097" watchObservedRunningTime="2025-11-23 08:44:42.773303775 +0000 UTC m=+1.414987524"
	Nov 23 08:44:43 embed-certs-230843 kubelet[1471]: I1123 08:44:43.602993    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-230843" podStartSLOduration=3.602957066 podStartE2EDuration="3.602957066s" podCreationTimestamp="2025-11-23 08:44:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:42.789700594 +0000 UTC m=+1.431384359" watchObservedRunningTime="2025-11-23 08:44:43.602957066 +0000 UTC m=+2.244640815"
	Nov 23 08:44:44 embed-certs-230843 kubelet[1471]: I1123 08:44:44.910965    1471 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:44:44 embed-certs-230843 kubelet[1471]: I1123 08:44:44.911567    1471 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040099    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fe21e2f-c557-4c67-940e-de5d501ffa9b-xtables-lock\") pod \"kindnet-cvhwv\" (UID: \"4fe21e2f-c557-4c67-940e-de5d501ffa9b\") " pod="kube-system/kindnet-cvhwv"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040146    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxnvx\" (UniqueName: \"kubernetes.io/projected/4fe21e2f-c557-4c67-940e-de5d501ffa9b-kube-api-access-qxnvx\") pod \"kindnet-cvhwv\" (UID: \"4fe21e2f-c557-4c67-940e-de5d501ffa9b\") " pod="kube-system/kindnet-cvhwv"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040175    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0df10366-97bb-4703-9840-09bb1770a2ae-kube-proxy\") pod \"kube-proxy-7q2pg\" (UID: \"0df10366-97bb-4703-9840-09bb1770a2ae\") " pod="kube-system/kube-proxy-7q2pg"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040199    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0df10366-97bb-4703-9840-09bb1770a2ae-xtables-lock\") pod \"kube-proxy-7q2pg\" (UID: \"0df10366-97bb-4703-9840-09bb1770a2ae\") " pod="kube-system/kube-proxy-7q2pg"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040217    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46f85\" (UniqueName: \"kubernetes.io/projected/0df10366-97bb-4703-9840-09bb1770a2ae-kube-api-access-46f85\") pod \"kube-proxy-7q2pg\" (UID: \"0df10366-97bb-4703-9840-09bb1770a2ae\") " pod="kube-system/kube-proxy-7q2pg"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040238    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0df10366-97bb-4703-9840-09bb1770a2ae-lib-modules\") pod \"kube-proxy-7q2pg\" (UID: \"0df10366-97bb-4703-9840-09bb1770a2ae\") " pod="kube-system/kube-proxy-7q2pg"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040255    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fe21e2f-c557-4c67-940e-de5d501ffa9b-cni-cfg\") pod \"kindnet-cvhwv\" (UID: \"4fe21e2f-c557-4c67-940e-de5d501ffa9b\") " pod="kube-system/kindnet-cvhwv"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040283    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fe21e2f-c557-4c67-940e-de5d501ffa9b-lib-modules\") pod \"kindnet-cvhwv\" (UID: \"4fe21e2f-c557-4c67-940e-de5d501ffa9b\") " pod="kube-system/kindnet-cvhwv"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.347789    1471 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 08:44:47 embed-certs-230843 kubelet[1471]: I1123 08:44:47.740931    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7q2pg" podStartSLOduration=2.740908816 podStartE2EDuration="2.740908816s" podCreationTimestamp="2025-11-23 08:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:47.740401364 +0000 UTC m=+6.382085113" watchObservedRunningTime="2025-11-23 08:44:47.740908816 +0000 UTC m=+6.382592557"
	Nov 23 08:44:47 embed-certs-230843 kubelet[1471]: I1123 08:44:47.808816    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cvhwv" podStartSLOduration=2.808799444 podStartE2EDuration="2.808799444s" podCreationTimestamp="2025-11-23 08:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:47.808645801 +0000 UTC m=+6.450329542" watchObservedRunningTime="2025-11-23 08:44:47.808799444 +0000 UTC m=+6.450483193"
	Nov 23 08:45:27 embed-certs-230843 kubelet[1471]: I1123 08:45:27.793231    1471 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:45:27 embed-certs-230843 kubelet[1471]: I1123 08:45:27.963256    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smzgx\" (UniqueName: \"kubernetes.io/projected/c026bb7f-0356-460c-beba-7e338e6406ec-kube-api-access-smzgx\") pod \"storage-provisioner\" (UID: \"c026bb7f-0356-460c-beba-7e338e6406ec\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:27 embed-certs-230843 kubelet[1471]: I1123 08:45:27.963317    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvdst\" (UniqueName: \"kubernetes.io/projected/b07768e4-8c90-4092-a257-3ec33d787231-kube-api-access-cvdst\") pod \"coredns-66bc5c9577-64zf9\" (UID: \"b07768e4-8c90-4092-a257-3ec33d787231\") " pod="kube-system/coredns-66bc5c9577-64zf9"
	Nov 23 08:45:27 embed-certs-230843 kubelet[1471]: I1123 08:45:27.963342    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c026bb7f-0356-460c-beba-7e338e6406ec-tmp\") pod \"storage-provisioner\" (UID: \"c026bb7f-0356-460c-beba-7e338e6406ec\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:27 embed-certs-230843 kubelet[1471]: I1123 08:45:27.963360    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b07768e4-8c90-4092-a257-3ec33d787231-config-volume\") pod \"coredns-66bc5c9577-64zf9\" (UID: \"b07768e4-8c90-4092-a257-3ec33d787231\") " pod="kube-system/coredns-66bc5c9577-64zf9"
	Nov 23 08:45:28 embed-certs-230843 kubelet[1471]: I1123 08:45:28.848172    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-64zf9" podStartSLOduration=42.84815334 podStartE2EDuration="42.84815334s" podCreationTimestamp="2025-11-23 08:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:28.831566156 +0000 UTC m=+47.473249905" watchObservedRunningTime="2025-11-23 08:45:28.84815334 +0000 UTC m=+47.489837081"
	Nov 23 08:45:28 embed-certs-230843 kubelet[1471]: I1123 08:45:28.865327    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.865307802 podStartE2EDuration="40.865307802s" podCreationTimestamp="2025-11-23 08:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:28.851006619 +0000 UTC m=+47.492690360" watchObservedRunningTime="2025-11-23 08:45:28.865307802 +0000 UTC m=+47.506991543"
	Nov 23 08:45:31 embed-certs-230843 kubelet[1471]: I1123 08:45:31.387954    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6bkm\" (UniqueName: \"kubernetes.io/projected/447e0831-d5fa-46df-8ee0-a7779b02f544-kube-api-access-m6bkm\") pod \"busybox\" (UID: \"447e0831-d5fa-46df-8ee0-a7779b02f544\") " pod="default/busybox"
	
	
	==> storage-provisioner [1643cef73498bee207cedab516c684b9aa70205b15932f5d3f7a3cf78cc833b5] <==
	I1123 08:45:28.377744       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:45:28.400154       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:45:28.400227       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:28.403145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:28.412955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:28.413125       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:28.415505       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e48fa202-bbe7-420b-9477-919a4bddc0d5", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-230843_7bf837ae-cbd5-4db1-b4ed-7ed965a3f9ec became leader
	I1123 08:45:28.415663       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-230843_7bf837ae-cbd5-4db1-b4ed-7ed965a3f9ec!
	W1123 08:45:28.429384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:28.435500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:28.516278       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-230843_7bf837ae-cbd5-4db1-b4ed-7ed965a3f9ec!
	W1123 08:45:30.439567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:30.445041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:32.448538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:32.453071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:34.455915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:34.460551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:36.464691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:36.471679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:38.475432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:38.483501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:40.492403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:40.511419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:42.516054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:42.542057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-230843 -n embed-certs-230843
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-230843 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-230843
helpers_test.go:243: (dbg) docker inspect embed-certs-230843:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d",
	        "Created": "2025-11-23T08:44:06.268139127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208950,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:44:06.366094661Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d/hosts",
	        "LogPath": "/var/lib/docker/containers/2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d/2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d-json.log",
	        "Name": "/embed-certs-230843",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-230843:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-230843",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2bfccd8525daf7ee5a22777f7928a1a9173705a9fb0001a32ade15b2cff1df2d",
	                "LowerDir": "/var/lib/docker/overlay2/03da6dc613c640352465e3da485494ade322d3cb48714cbb034c323f83515bc9-init/diff:/var/lib/docker/overlay2/88c30082a717909d357f7d81c88a05ce3487a40d372ee6dc57fb9f012e0502da/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03da6dc613c640352465e3da485494ade322d3cb48714cbb034c323f83515bc9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03da6dc613c640352465e3da485494ade322d3cb48714cbb034c323f83515bc9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03da6dc613c640352465e3da485494ade322d3cb48714cbb034c323f83515bc9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-230843",
	                "Source": "/var/lib/docker/volumes/embed-certs-230843/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-230843",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-230843",
	                "name.minikube.sigs.k8s.io": "embed-certs-230843",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb50b174d359b974409dc15124d96c77d2d9296c810302404b3a338d79265ab",
	            "SandboxKey": "/var/run/docker/netns/8cb50b174d35",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-230843": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:e4:32:dd:dd:fa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f76841edf3bfad9bd4d843371f12a3ed6e6a38d046c709f759987c025cf92ee5",
	                    "EndpointID": "d182c06ed275adabad64d5c95f5f59d6f899b4c4259a5f3e561782a929cbd861",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-230843",
	                        "2bfccd8525da"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-230843 -n embed-certs-230843
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-230843 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-230843 logs -n 25: (1.797132275s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ start   │ -p cert-expiration-119748 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:39 UTC │ 23 Nov 25 08:40 UTC │
	│ ssh     │ force-systemd-env-760522 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ delete  │ -p force-systemd-env-760522                                                                                                                                                                                                                         │ force-systemd-env-760522 │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:40 UTC │
	│ start   │ -p cert-options-106536 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:40 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ cert-options-106536 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ -p cert-options-106536 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p cert-options-106536                                                                                                                                                                                                                              │ cert-options-106536      │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-180638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ stop    │ -p old-k8s-version-180638 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-180638 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p cert-expiration-119748 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p cert-expiration-119748                                                                                                                                                                                                                           │ cert-expiration-119748   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p no-preload-596617 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-180638 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ pause   │ -p old-k8s-version-180638 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ unpause │ -p old-k8s-version-180638 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p old-k8s-version-180638                                                                                                                                                                                                                           │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p old-k8s-version-180638                                                                                                                                                                                                                           │ old-k8s-version-180638   │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p embed-certs-230843 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-230843       │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:45 UTC │
	│ addons  │ enable metrics-server -p no-preload-596617 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ stop    │ -p no-preload-596617 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ addons  │ enable dashboard -p no-preload-596617 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p no-preload-596617 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-596617        │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:45:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:45:31.081446  214489 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:31.081602  214489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:31.081613  214489 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:31.081619  214489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:31.081935  214489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:45:31.082310  214489 out.go:368] Setting JSON to false
	I1123 08:45:31.083414  214489 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5280,"bootTime":1763882251,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:45:31.083487  214489 start.go:143] virtualization:  
	I1123 08:45:31.087654  214489 out.go:179] * [no-preload-596617] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:45:31.090895  214489 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:45:31.090958  214489 notify.go:221] Checking for updates...
	I1123 08:45:31.098009  214489 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:45:31.100867  214489 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:45:31.103899  214489 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 08:45:31.106844  214489 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:45:31.109858  214489 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:45:31.113540  214489 config.go:182] Loaded profile config "no-preload-596617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:45:31.114187  214489 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:45:31.154610  214489 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:45:31.154736  214489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:31.252722  214489 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:45:31.242676492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:45:31.252826  214489 docker.go:319] overlay module found
	I1123 08:45:31.256042  214489 out.go:179] * Using the docker driver based on existing profile
	I1123 08:45:31.258921  214489 start.go:309] selected driver: docker
	I1123 08:45:31.258940  214489 start.go:927] validating driver "docker" against &{Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:31.259060  214489 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:45:31.259770  214489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:31.319215  214489 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:45:31.304179369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:45:31.319540  214489 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:31.319573  214489 cni.go:84] Creating CNI manager for ""
	I1123 08:45:31.319631  214489 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:45:31.319684  214489 start.go:353] cluster config:
	{Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:31.324896  214489 out.go:179] * Starting "no-preload-596617" primary control-plane node in "no-preload-596617" cluster
	I1123 08:45:31.327840  214489 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:45:31.330795  214489 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:45:31.333644  214489 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:45:31.333719  214489 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:45:31.333781  214489 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/config.json ...
	I1123 08:45:31.334083  214489 cache.go:107] acquiring lock: {Name:mk1e3231b750d1ca9ca2b3f99138ed3e9903a4d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334168  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 08:45:31.334177  214489 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.771µs
	I1123 08:45:31.334188  214489 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 08:45:31.334199  214489 cache.go:107] acquiring lock: {Name:mke9f25b0047751d431afe9e21a9064e6097bf6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334232  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 08:45:31.334237  214489 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 39.098µs
	I1123 08:45:31.334243  214489 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 08:45:31.334252  214489 cache.go:107] acquiring lock: {Name:mk809d72912357ebab68612e28a7a4618f1d6a79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334278  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 08:45:31.334283  214489 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.403µs
	I1123 08:45:31.334289  214489 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 08:45:31.334298  214489 cache.go:107] acquiring lock: {Name:mkd95a0f707394665885d4a227157bc117b65e9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334322  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 08:45:31.334327  214489 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.663µs
	I1123 08:45:31.334332  214489 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 08:45:31.334341  214489 cache.go:107] acquiring lock: {Name:mkbfec764792a8eb5c7ecc45cc698ede09834e24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334340  214489 cache.go:107] acquiring lock: {Name:mk5de408a9d67352662d4ae7d2550befae22c8a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334373  214489 cache.go:107] acquiring lock: {Name:mk50d60d7163508be3f18275abd2da533f83f001 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334402  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 08:45:31.334407  214489 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 34.79µs
	I1123 08:45:31.334411  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1123 08:45:31.334418  214489 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 08:45:31.334421  214489 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 88.625µs
	I1123 08:45:31.334429  214489 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 08:45:31.334366  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 08:45:31.334430  214489 cache.go:107] acquiring lock: {Name:mka80ab06b467377886e371915ae350b5918a590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.334440  214489 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 99.767µs
	I1123 08:45:31.334446  214489 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 08:45:31.334457  214489 cache.go:115] /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 08:45:31.334462  214489 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 32.755µs
	I1123 08:45:31.334467  214489 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 08:45:31.334473  214489 cache.go:87] Successfully saved all images to host disk.
	I1123 08:45:31.353746  214489 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:45:31.353768  214489 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:45:31.353790  214489 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:45:31.353825  214489 start.go:360] acquireMachinesLock for no-preload-596617: {Name:mkf8b1df8f307f4f80d1d148c731210823cc9721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:31.353897  214489 start.go:364] duration metric: took 57.223µs to acquireMachinesLock for "no-preload-596617"
	I1123 08:45:31.353919  214489 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:45:31.353924  214489 fix.go:54] fixHost starting: 
	I1123 08:45:31.354183  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:31.372423  214489 fix.go:112] recreateIfNeeded on no-preload-596617: state=Stopped err=<nil>
	W1123 08:45:31.372454  214489 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:45:31.375669  214489 out.go:252] * Restarting existing docker container for "no-preload-596617" ...
	I1123 08:45:31.375806  214489 cli_runner.go:164] Run: docker start no-preload-596617
	I1123 08:45:31.639329  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:31.662184  214489 kic.go:430] container "no-preload-596617" state is running.
	I1123 08:45:31.662572  214489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-596617
	I1123 08:45:31.686218  214489 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/config.json ...
	I1123 08:45:31.686507  214489 machine.go:94] provisionDockerMachine start ...
	I1123 08:45:31.686596  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:31.712297  214489 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:31.712736  214489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1123 08:45:31.712750  214489 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:45:31.713530  214489 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37098->127.0.0.1:33073: read: connection reset by peer
	I1123 08:45:34.869009  214489 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-596617
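The first dial above failed with "connection reset by peer" (08:45:31) simply because sshd inside the freshly restarted container was not up yet; the retry about three seconds later succeeds. The native SSH client connects to 127.0.0.1:33073, the host port Docker mapped to the container's 22/tcp, which is what the docker container inspect template a few lines earlier resolves. The same mapping can be confirmed by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-596617
	docker port no-preload-596617 22/tcp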
	
	I1123 08:45:34.869041  214489 ubuntu.go:182] provisioning hostname "no-preload-596617"
	I1123 08:45:34.869104  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:34.888099  214489 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:34.888410  214489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1123 08:45:34.888421  214489 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-596617 && echo "no-preload-596617" | sudo tee /etc/hostname
	I1123 08:45:35.053879  214489 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-596617
	
	I1123 08:45:35.054070  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:35.073209  214489 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:35.073619  214489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1123 08:45:35.073647  214489 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-596617' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-596617/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-596617' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:45:35.229890  214489 main.go:143] libmachine: SSH cmd err, output: <nil>: 
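The multi-line shell snippet above makes the node's own hostname resolve locally: if no /etc/hosts line already maps to no-preload-596617, the 127.0.1.1 entry is rewritten (or appended). A minimal sketch of the resulting fragment, illustrative contents only:

	127.0.0.1	localhost
	127.0.1.1	no-preload-596617

Later steps in this log patch /etc/hosts the same way for host.minikube.internal (192.168.85.1) and control-plane.minikube.internal (192.168.85.2).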
	I1123 08:45:35.229960  214489 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-2339/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-2339/.minikube}
	I1123 08:45:35.230028  214489 ubuntu.go:190] setting up certificates
	I1123 08:45:35.230060  214489 provision.go:84] configureAuth start
	I1123 08:45:35.230147  214489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-596617
	I1123 08:45:35.248479  214489 provision.go:143] copyHostCerts
	I1123 08:45:35.248564  214489 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem, removing ...
	I1123 08:45:35.248582  214489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem
	I1123 08:45:35.248663  214489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem (1123 bytes)
	I1123 08:45:35.248766  214489 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem, removing ...
	I1123 08:45:35.248777  214489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem
	I1123 08:45:35.248808  214489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem (1675 bytes)
	I1123 08:45:35.248872  214489 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem, removing ...
	I1123 08:45:35.248882  214489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem
	I1123 08:45:35.248909  214489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem (1078 bytes)
	I1123 08:45:35.248961  214489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem org=jenkins.no-preload-596617 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-596617]
	I1123 08:45:35.850806  214489 provision.go:177] copyRemoteCerts
	I1123 08:45:35.850875  214489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:45:35.850917  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:35.869201  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:35.979046  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:45:35.999654  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:45:36.025274  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:45:36.045708  214489 provision.go:87] duration metric: took 815.610253ms to configureAuth
	I1123 08:45:36.045779  214489 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:45:36.046004  214489 config.go:182] Loaded profile config "no-preload-596617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:45:36.046035  214489 machine.go:97] duration metric: took 4.359518543s to provisionDockerMachine
	I1123 08:45:36.046044  214489 start.go:293] postStartSetup for "no-preload-596617" (driver="docker")
	I1123 08:45:36.046055  214489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:45:36.046105  214489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:45:36.046150  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:36.063725  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:36.169613  214489 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:45:36.172944  214489 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:45:36.172973  214489 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:45:36.172986  214489 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/addons for local assets ...
	I1123 08:45:36.173040  214489 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/files for local assets ...
	I1123 08:45:36.173117  214489 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem -> 41512.pem in /etc/ssl/certs
	I1123 08:45:36.173237  214489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:45:36.180871  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:45:36.200612  214489 start.go:296] duration metric: took 154.553376ms for postStartSetup
	I1123 08:45:36.200694  214489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:45:36.200741  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:36.217833  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:36.322374  214489 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:45:36.326930  214489 fix.go:56] duration metric: took 4.972999941s for fixHost
	I1123 08:45:36.326957  214489 start.go:83] releasing machines lock for "no-preload-596617", held for 4.973052224s
	I1123 08:45:36.327041  214489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-596617
	I1123 08:45:36.343318  214489 ssh_runner.go:195] Run: cat /version.json
	I1123 08:45:36.343366  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:36.343437  214489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:45:36.343499  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:36.361025  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:36.362515  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:36.466112  214489 ssh_runner.go:195] Run: systemctl --version
	I1123 08:45:36.595398  214489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:45:36.599765  214489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:45:36.599840  214489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:45:36.609111  214489 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
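The find/mv above is how any pre-existing bridge or podman CNI configuration gets sidelined so it cannot shadow the CNI minikube configures later: matching files under /etc/cni/net.d are renamed with a .mk_disabled suffix. Nothing matched here, so there was nothing to disable. Done by hand it would amount to something like (hypothetical file name):

	sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled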
	I1123 08:45:36.609177  214489 start.go:496] detecting cgroup driver to use...
	I1123 08:45:36.609212  214489 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:45:36.609275  214489 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:45:36.627766  214489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:45:36.641333  214489 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:45:36.641493  214489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:45:36.657217  214489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:45:36.669939  214489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:45:36.783552  214489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:45:36.909872  214489 docker.go:234] disabling docker service ...
	I1123 08:45:36.909988  214489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:45:36.925902  214489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:45:36.939655  214489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:45:37.081256  214489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:45:37.211762  214489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:45:37.224765  214489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:45:37.239027  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:45:37.248744  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:45:37.257550  214489 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:45:37.257655  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:45:37.266285  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:45:37.274831  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:45:37.287905  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:45:37.296301  214489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:45:37.304170  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:45:37.316409  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:45:37.325574  214489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:45:37.335262  214489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:45:37.342879  214489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:45:37.350627  214489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:37.480643  214489 ssh_runner.go:195] Run: sudo systemctl restart containerd
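The sed sequence above edits the image's existing /etc/containerd/config.toml in place rather than regenerating it: it pins the pause image to registry.k8s.io/pause:3.10.1, turns off restrict_oom_score_adj, selects the cgroupfs cgroup driver (SystemdCgroup = false, matching the driver detected on the host), normalizes the runc runtime type to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports before restarting containerd. A rough sketch of the affected fragment after the edits; the section paths are illustrative and differ between containerd 1.x and 2.x config schemas:

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false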
	I1123 08:45:37.652134  214489 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:45:37.652246  214489 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:45:37.656648  214489 start.go:564] Will wait 60s for crictl version
	I1123 08:45:37.656763  214489 ssh_runner.go:195] Run: which crictl
	I1123 08:45:37.660339  214489 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:45:37.690591  214489 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:45:37.690714  214489 ssh_runner.go:195] Run: containerd --version
	I1123 08:45:37.714789  214489 ssh_runner.go:195] Run: containerd --version
	I1123 08:45:37.737512  214489 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:45:37.740637  214489 cli_runner.go:164] Run: docker network inspect no-preload-596617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:37.756942  214489 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:45:37.760830  214489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:37.770599  214489 kubeadm.go:884] updating cluster {Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:45:37.770724  214489 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:45:37.770782  214489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:37.796463  214489 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:45:37.796491  214489 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:45:37.796499  214489 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1123 08:45:37.796597  214489 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-596617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:45:37.796662  214489 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:45:37.824748  214489 cni.go:84] Creating CNI manager for ""
	I1123 08:45:37.824777  214489 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:45:37.824797  214489 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:45:37.824832  214489 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-596617 NodeName:no-preload-596617 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:45:37.824956  214489 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-596617"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:45:37.825034  214489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:45:37.833591  214489 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:45:37.833658  214489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:45:37.841227  214489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 08:45:37.855027  214489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:45:37.867614  214489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
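The three scp-from-memory calls above land the kubelet drop-in (10-kubeadm.conf), the kubelet unit itself, and the kubeadm config rendered earlier; the systemctl daemon-reload a few lines below picks up the unit change. Two ways to inspect the result on the node, assuming the kubeadm binary under /var/lib/minikube/binaries supports the validate subcommand (present in recent releases):

	sudo systemctl cat kubelet   # unit file plus the 10-kubeadm.conf drop-in
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new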
	I1123 08:45:37.880143  214489 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:45:37.883674  214489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:37.893519  214489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:38.018478  214489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:38.038155  214489 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617 for IP: 192.168.85.2
	I1123 08:45:38.038228  214489 certs.go:195] generating shared ca certs ...
	I1123 08:45:38.038259  214489 certs.go:227] acquiring lock for ca certs: {Name:mke0fc62f41acbef5eb3e84af3a3b8f9858bd1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:38.038523  214489 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key
	I1123 08:45:38.038640  214489 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key
	I1123 08:45:38.038669  214489 certs.go:257] generating profile certs ...
	I1123 08:45:38.038809  214489 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.key
	I1123 08:45:38.038978  214489 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key.5887770e
	I1123 08:45:38.039116  214489 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key
	I1123 08:45:38.039311  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem (1338 bytes)
	W1123 08:45:38.039390  214489 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151_empty.pem, impossibly tiny 0 bytes
	I1123 08:45:38.039422  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 08:45:38.039499  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:45:38.039568  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:45:38.039634  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem (1675 bytes)
	I1123 08:45:38.039736  214489 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:45:38.040620  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:45:38.064556  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:45:38.086079  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:45:38.107609  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:45:38.126755  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:45:38.150710  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:45:38.169348  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:45:38.203740  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:45:38.231914  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /usr/share/ca-certificates/41512.pem (1708 bytes)
	I1123 08:45:38.262708  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:45:38.286526  214489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem --> /usr/share/ca-certificates/4151.pem (1338 bytes)
	I1123 08:45:38.317892  214489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:45:38.334424  214489 ssh_runner.go:195] Run: openssl version
	I1123 08:45:38.342759  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:45:38.358198  214489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:38.362610  214489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:38.362675  214489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:38.406799  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:45:38.415497  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4151.pem && ln -fs /usr/share/ca-certificates/4151.pem /etc/ssl/certs/4151.pem"
	I1123 08:45:38.424328  214489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4151.pem
	I1123 08:45:38.428513  214489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:02 /usr/share/ca-certificates/4151.pem
	I1123 08:45:38.428580  214489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4151.pem
	I1123 08:45:38.469713  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4151.pem /etc/ssl/certs/51391683.0"
	I1123 08:45:38.480269  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41512.pem && ln -fs /usr/share/ca-certificates/41512.pem /etc/ssl/certs/41512.pem"
	I1123 08:45:38.489650  214489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41512.pem
	I1123 08:45:38.494474  214489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:02 /usr/share/ca-certificates/41512.pem
	I1123 08:45:38.494579  214489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41512.pem
	I1123 08:45:38.538710  214489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41512.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:45:38.546845  214489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:45:38.551545  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:45:38.598277  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:45:38.640067  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:45:38.683499  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:45:38.725672  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:45:38.781783  214489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
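Two things happen in the certificate block above. Each CA copied to /usr/share/ca-certificates gets an OpenSSL subject-hash symlink in /etc/ssl/certs (b5213941.0 for minikubeCA.pem, for example), the hashed-directory layout that openssl verify -CApath and most TLS stacks expect. Then each existing apiserver, etcd and front-proxy certificate is run through openssl x509 -checkend 86400, which exits non-zero if the certificate would expire within the next 24 hours. A small sketch of both, with the minikubeCA path taken from the log and the final line purely illustrative:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid for at least another day"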
	I1123 08:45:38.836583  214489 kubeadm.go:401] StartCluster: {Name:no-preload-596617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-596617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:38.836731  214489 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:45:38.836832  214489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:45:38.921526  214489 cri.go:89] found id: "688a87c4a6cfdbcdf8876e5686af2cb559d7878123fea8977dff105b58a52002"
	I1123 08:45:38.921602  214489 cri.go:89] found id: "91c445761c11225f240bc25605c50446bcaa23a89a3ee6c7f275c64941c44788"
	I1123 08:45:38.921622  214489 cri.go:89] found id: "4ff14e63674511be9833e17757d7ac8c83cf043c373fdfaeba96b335a278376f"
	I1123 08:45:38.921645  214489 cri.go:89] found id: "38a03d8690d80f6c742953b846418123550408e6b4fc3bc3ed61b8578754af02"
	I1123 08:45:38.921689  214489 cri.go:89] found id: "ae63305653ca8cbbd80c13dd0f9434bfc3feedc3bbff30a329f62b0559f2895a"
	I1123 08:45:38.921713  214489 cri.go:89] found id: "2e1de07c6493d308c8cde1bd08ad1af4bde14c9a11d6c18de05914a462d0021b"
	I1123 08:45:38.921733  214489 cri.go:89] found id: "3922d3ac1a3fa33fc277f69cf60fea88cb74510306d065fff3aedfcea5e11cd5"
	I1123 08:45:38.921767  214489 cri.go:89] found id: "0106e17e619c21c4f70b18b18b785e598009939262000db201255c4c23134bb6"
	I1123 08:45:38.921790  214489 cri.go:89] found id: ""
	I1123 08:45:38.921874  214489 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 08:45:38.959607  214489 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-596617_95d6e38b29f32119fb34b2fd5647f69d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.
cri.sandbox-name":"etcd-no-preload-596617","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"95d6e38b29f32119fb34b2fd5647f69d"},"owner":"root"},{"ociVersion":"1.2.1","id":"6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kub
e-system_kube-apiserver-no-preload-596617_7fd9e0d712de079a2b15f8f5d509bcc6","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-596617","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7fd9e0d712de079a2b15f8f5d509bcc6"},"owner":"root"},{"ociVersion":"1.2.1","id":"e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c","pid":895,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c/rootfs","created":"2025-11-23T08:45:38.873599607Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.ku
bernetes.cri.sandbox-id":"e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-596617_1a4385a43578375499ace3c12875268a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-596617","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1a4385a43578375499ace3c12875268a"},"owner":"root"},{"ociVersion":"1.2.1","id":"efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a","pid":908,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a/rootfs","created":"2025-11-23T08:45:38.866757323Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pau
se:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-596617_0cce2b6da6fbc15cd83724928d768fdc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-596617","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0cce2b6da6fbc15cd83724928d768fdc"},"owner":"root"}]
	I1123 08:45:38.959861  214489 cri.go:126] list returned 4 containers
	I1123 08:45:38.959900  214489 cri.go:129] container: {ID:430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384 Status:stopped}
	I1123 08:45:38.959935  214489 cri.go:131] skipping 430e1934b8fd510f1a03107e8e0611223df2cbe36f9ba543b1c3d0510efac384 - not in ps
	I1123 08:45:38.959978  214489 cri.go:129] container: {ID:6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14 Status:stopped}
	I1123 08:45:38.960003  214489 cri.go:131] skipping 6c9b0fcc2dce26957ba4fab0f2b4bf41d303b8dbb1c855117ec7e84e905ebb14 - not in ps
	I1123 08:45:38.960043  214489 cri.go:129] container: {ID:e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c Status:created}
	I1123 08:45:38.960066  214489 cri.go:131] skipping e1c4187689b4c9800b33df6579fb8b148e2c86b39cac0b6eae4a6d75715d355c - not in ps
	I1123 08:45:38.960087  214489 cri.go:129] container: {ID:efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a Status:created}
	I1123 08:45:38.960130  214489 cri.go:131] skipping efde8f3e02fcc3b16164077e897393bd7dda5a45193f4d189fa7d2e5c562ff1a - not in ps
	I1123 08:45:38.960223  214489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:45:38.975901  214489 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:45:38.975978  214489 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:45:38.976074  214489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:45:38.991808  214489 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:45:38.992849  214489 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-596617" does not appear in /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:45:38.993539  214489 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-2339/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-596617" cluster setting kubeconfig missing "no-preload-596617" context setting]
	I1123 08:45:38.994458  214489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:38.996453  214489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:45:39.008852  214489 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 08:45:39.008949  214489 kubeadm.go:602] duration metric: took 32.951112ms to restartPrimaryControlPlane
	I1123 08:45:39.008972  214489 kubeadm.go:403] duration metric: took 172.411976ms to StartCluster
	I1123 08:45:39.009029  214489 settings.go:142] acquiring lock: {Name:mkfb77243b31dfe604b438e7da3f1bce2ba7b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:39.009138  214489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:45:39.010881  214489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:39.011561  214489 config.go:182] Loaded profile config "no-preload-596617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:45:39.011686  214489 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:45:39.011758  214489 addons.go:70] Setting storage-provisioner=true in profile "no-preload-596617"
	I1123 08:45:39.011779  214489 addons.go:239] Setting addon storage-provisioner=true in "no-preload-596617"
	W1123 08:45:39.011786  214489 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:45:39.011809  214489 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:45:39.012299  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:39.011649  214489 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:45:39.013730  214489 addons.go:70] Setting dashboard=true in profile "no-preload-596617"
	I1123 08:45:39.013752  214489 addons.go:239] Setting addon dashboard=true in "no-preload-596617"
	W1123 08:45:39.013760  214489 addons.go:248] addon dashboard should already be in state true
	I1123 08:45:39.013797  214489 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:45:39.014264  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:39.014546  214489 addons.go:70] Setting metrics-server=true in profile "no-preload-596617"
	I1123 08:45:39.014565  214489 addons.go:239] Setting addon metrics-server=true in "no-preload-596617"
	W1123 08:45:39.014572  214489 addons.go:248] addon metrics-server should already be in state true
	I1123 08:45:39.014613  214489 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:45:39.015041  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:39.016694  214489 addons.go:70] Setting default-storageclass=true in profile "no-preload-596617"
	I1123 08:45:39.016718  214489 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-596617"
	I1123 08:45:39.017015  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:39.019827  214489 out.go:179] * Verifying Kubernetes components...
	I1123 08:45:39.023396  214489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:39.061751  214489 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:45:39.066392  214489 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:39.066426  214489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:45:39.066496  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:39.072884  214489 addons.go:239] Setting addon default-storageclass=true in "no-preload-596617"
	W1123 08:45:39.072907  214489 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:45:39.074937  214489 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:45:39.077261  214489 host.go:66] Checking if "no-preload-596617" exists ...
	I1123 08:45:39.077773  214489 cli_runner.go:164] Run: docker container inspect no-preload-596617 --format={{.State.Status}}
	I1123 08:45:39.082511  214489 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 08:45:39.085511  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:45:39.085535  214489 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:45:39.085610  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:39.109958  214489 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 08:45:39.116982  214489 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 08:45:39.117021  214489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 08:45:39.117094  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:39.136342  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:39.146437  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:39.159911  214489 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:39.159933  214489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:45:39.160002  214489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-596617
	I1123 08:45:39.172966  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:39.196152  214489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/no-preload-596617/id_rsa Username:docker}
	I1123 08:45:39.359016  214489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:39.402597  214489 node_ready.go:35] waiting up to 6m0s for node "no-preload-596617" to be "Ready" ...
	I1123 08:45:39.452427  214489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 08:45:39.452489  214489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 08:45:39.501356  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:45:39.501458  214489 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:45:39.557125  214489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:39.574251  214489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 08:45:39.574320  214489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 08:45:39.657370  214489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:39.716127  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:45:39.716203  214489 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:45:39.727640  214489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:45:39.727716  214489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 08:45:39.918313  214489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:45:39.935742  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:45:39.935764  214489 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:45:40.054502  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:45:40.054522  214489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:45:40.256707  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:45:40.256729  214489 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:45:40.410734  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:45:40.410756  214489 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:45:40.658144  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:45:40.658166  214489 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:45:40.690954  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:45:40.691005  214489 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:45:40.737077  214489 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:45:40.737105  214489 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:45:40.778424  214489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	8c3a9b57aa3fa       1611cd07b61d5       10 seconds ago       Running             busybox                   0                   98fdfe5685049       busybox                                      default
	bc7e685270adf       138784d87c9c5       16 seconds ago       Running             coredns                   0                   fc302873a148b       coredns-66bc5c9577-64zf9                     kube-system
	1643cef73498b       ba04bb24b9575       16 seconds ago       Running             storage-provisioner       0                   3a58ba9e071e8       storage-provisioner                          kube-system
	5f6d1056bb18b       05baa95f5142d       58 seconds ago       Running             kube-proxy                0                   fc3f38c04d9c9       kube-proxy-7q2pg                             kube-system
	f0d0a156acdac       b1a8c6f707935       58 seconds ago       Running             kindnet-cni               0                   85d7f63e99ad9       kindnet-cvhwv                                kube-system
	7815de9f3375d       a1894772a478e       About a minute ago   Running             etcd                      0                   0c8a6c43c5d6d       etcd-embed-certs-230843                      kube-system
	37257eb77812e       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   01015a6393b7f       kube-apiserver-embed-certs-230843            kube-system
	9080da21cc845       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   047e03a9d72c8       kube-scheduler-embed-certs-230843            kube-system
	d7d27bf5ec2ff       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   8d94c6589312f       kube-controller-manager-embed-certs-230843   kube-system
	
	
	==> containerd <==
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.273344624Z" level=info msg="connecting to shim 1643cef73498bee207cedab516c684b9aa70205b15932f5d3f7a3cf78cc833b5" address="unix:///run/containerd/s/3bab59b63c6b030cf9dd07f2c509f040e6b5b34c9fe1d9fc0c0e3b394e5055d2" protocol=ttrpc version=3
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.325815126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-64zf9,Uid:b07768e4-8c90-4092-a257-3ec33d787231,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc302873a148b399d70c051330e0d1a33a5b343203c5607d9127d5f919c9df85\""
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.333139402Z" level=info msg="CreateContainer within sandbox \"fc302873a148b399d70c051330e0d1a33a5b343203c5607d9127d5f919c9df85\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.344481098Z" level=info msg="Container bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.362985648Z" level=info msg="CreateContainer within sandbox \"fc302873a148b399d70c051330e0d1a33a5b343203c5607d9127d5f919c9df85\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3\""
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.368970740Z" level=info msg="StartContainer for \"bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3\""
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.370308783Z" level=info msg="connecting to shim bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3" address="unix:///run/containerd/s/4608ece2871b1abe57a3f54e57d31f97a2cf4975fdc53616d2e4041a0a884de5" protocol=ttrpc version=3
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.381683809Z" level=info msg="StartContainer for \"1643cef73498bee207cedab516c684b9aa70205b15932f5d3f7a3cf78cc833b5\" returns successfully"
	Nov 23 08:45:28 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:28.455810770Z" level=info msg="StartContainer for \"bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3\" returns successfully"
	Nov 23 08:45:31 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:31.800427357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:447e0831-d5fa-46df-8ee0-a7779b02f544,Namespace:default,Attempt:0,}"
	Nov 23 08:45:31 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:31.862847316Z" level=info msg="connecting to shim 98fdfe56850498bfb6e52c2e7f6543f8d80d40adc465e964996dba87fcd3831c" address="unix:///run/containerd/s/487c356479455e6d25bb08223165e1b4d328530be47437424df978a630e4659d" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:45:31 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:31.989790760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:447e0831-d5fa-46df-8ee0-a7779b02f544,Namespace:default,Attempt:0,} returns sandbox id \"98fdfe56850498bfb6e52c2e7f6543f8d80d40adc465e964996dba87fcd3831c\""
	Nov 23 08:45:32 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:32.001508001Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.200029591Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.201881939Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.204408788Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.208337116Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.208863851Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.204065668s"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.208907453Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.215247963Z" level=info msg="CreateContainer within sandbox \"98fdfe56850498bfb6e52c2e7f6543f8d80d40adc465e964996dba87fcd3831c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.228278484Z" level=info msg="Container 8c3a9b57aa3fa8a526c2fff7ea5fd3f15aebed1e992b56869329db637ea5318b: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.240217052Z" level=info msg="CreateContainer within sandbox \"98fdfe56850498bfb6e52c2e7f6543f8d80d40adc465e964996dba87fcd3831c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8c3a9b57aa3fa8a526c2fff7ea5fd3f15aebed1e992b56869329db637ea5318b\""
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.242490303Z" level=info msg="StartContainer for \"8c3a9b57aa3fa8a526c2fff7ea5fd3f15aebed1e992b56869329db637ea5318b\""
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.243461507Z" level=info msg="connecting to shim 8c3a9b57aa3fa8a526c2fff7ea5fd3f15aebed1e992b56869329db637ea5318b" address="unix:///run/containerd/s/487c356479455e6d25bb08223165e1b4d328530be47437424df978a630e4659d" protocol=ttrpc version=3
	Nov 23 08:45:34 embed-certs-230843 containerd[757]: time="2025-11-23T08:45:34.296359280Z" level=info msg="StartContainer for \"8c3a9b57aa3fa8a526c2fff7ea5fd3f15aebed1e992b56869329db637ea5318b\" returns successfully"
	
	
	==> coredns [bc7e685270adfd831a8ed08727e0e6b14d01b7f188c90b088cd17be67443ead3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50180 - 21689 "HINFO IN 8371299108907945111.2069241919711160450. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02338457s
	
	
	==> describe nodes <==
	Name:               embed-certs-230843
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-230843
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=embed-certs-230843
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:44:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-230843
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:45:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:45:43 +0000   Sun, 23 Nov 2025 08:44:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:45:43 +0000   Sun, 23 Nov 2025 08:44:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:45:43 +0000   Sun, 23 Nov 2025 08:44:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:45:43 +0000   Sun, 23 Nov 2025 08:45:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-230843
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                ea90e4f5-4c64-4793-a3b1-1cc79e44f0f7
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-64zf9                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59s
	  kube-system                 etcd-embed-certs-230843                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         64s
	  kube-system                 kindnet-cvhwv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-embed-certs-230843             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-embed-certs-230843    200m (10%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-proxy-7q2pg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-embed-certs-230843             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 57s                kube-proxy       
	  Normal   NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 79s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 79s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node embed-certs-230843 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     78s (x7 over 79s)  kubelet          Node embed-certs-230843 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node embed-certs-230843 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  64s                kubelet          Node embed-certs-230843 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s                kubelet          Node embed-certs-230843 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s                kubelet          Node embed-certs-230843 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           61s                node-controller  Node embed-certs-230843 event: Registered Node embed-certs-230843 in Controller
	  Normal   NodeReady                18s                kubelet          Node embed-certs-230843 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	
	
	==> etcd [7815de9f3375dcddbbc8379dd5a00b505f4db9ba3ca59f52fa41a0f7bcce5fe9] <==
	{"level":"warn","ts":"2025-11-23T08:44:34.103722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.149958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.167825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.206703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.257722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.293336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.326678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.364373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.429199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.452495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.487005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.505791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.537468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.573360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.601546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.631938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.734116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.758815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.798901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.868800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.929780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.959844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:34.991712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:35.030515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:35.157910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55952","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:45:45 up  1:28,  0 user,  load average: 3.42, 3.75, 3.18
	Linux embed-certs-230843 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f0d0a156acdacc9d4d9949e49a4372f92290702298c3fdcd060a234f9be14c60] <==
	I1123 08:44:47.477042       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:47.477541       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:44:47.477752       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:47.477799       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:47.477813       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:47.772640       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:47.772690       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:47.772704       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:47.772869       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:45:17.772417       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:45:17.772777       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:45:17.772891       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:45:17.772959       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1123 08:45:18.872863       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:45:18.873059       1 metrics.go:72] Registering metrics
	I1123 08:45:18.873239       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:27.690746       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:45:27.690800       1 main.go:301] handling current node
	I1123 08:45:37.690718       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:45:37.690777       1 main.go:301] handling current node
	
	
	==> kube-apiserver [37257eb77812edf6e29e98549c15886ab92f60d6da37840c23393a2fcd8bce7a] <==
	I1123 08:44:37.140947       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:44:37.144487       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:44:37.210131       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 08:44:37.210522       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:37.211372       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:44:37.217477       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:44:37.217700       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:44:37.217777       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:44:37.518668       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:44:37.548621       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:44:37.548655       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:44:39.378523       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:44:39.467323       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:44:39.642363       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:44:39.656761       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 08:44:39.658280       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:44:39.670732       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:44:39.818019       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:44:41.450504       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:44:41.472958       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:44:41.487231       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:44:45.663110       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:44:45.774054       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:45.816167       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:45.993275       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d7d27bf5ec2ffab09e4a0156bf3fb41c6d2e59dbf4c6daa9f64d25e8c5f183dc] <==
	I1123 08:44:44.809478       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:44:44.816655       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-230843" podCIDRs=["10.244.0.0/24"]
	I1123 08:44:44.809464       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:44:44.817586       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:44.818948       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:44:44.819595       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:44:44.821193       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-230843"
	I1123 08:44:44.821490       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:44:44.821375       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:44.824510       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:44:44.830822       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:44:44.831019       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:44:44.837164       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:44:44.841739       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:44:44.856183       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:44:44.859250       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:44:44.859492       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:44:44.859736       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:44:44.862882       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:44.863093       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:44:44.870163       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:44:44.886406       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:44.886661       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:44:44.886741       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:45:29.828111       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5f6d1056bb18bb56d51f45848340b4ebd99d67eaa5c4ffd79c9af9b2446b8dbe] <==
	I1123 08:44:47.652096       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:44:47.753587       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:44:47.956621       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:44:47.956661       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 08:44:47.956771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:44:48.047587       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:48.047653       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:44:48.061332       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:44:48.061681       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:44:48.061699       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:48.072963       1 config.go:200] "Starting service config controller"
	I1123 08:44:48.072984       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:44:48.073015       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:44:48.073020       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:44:48.073033       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:44:48.073037       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:44:48.094027       1 config.go:309] "Starting node config controller"
	I1123 08:44:48.094047       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:44:48.094055       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:44:48.174698       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:44:48.174741       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:44:48.174783       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9080da21cc8455a9549f1206f0f811a839f7627a0c2fc95eaa26193364f5ab2a] <==
	I1123 08:44:36.150358       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:44:40.586713       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:44:40.595078       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:40.606332       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:44:40.606627       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:40.606891       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:40.606581       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:44:40.607255       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:44:40.606644       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:44:40.614831       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:44:40.606658       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:44:40.707170       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:40.710120       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 08:44:40.715601       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:44:42 embed-certs-230843 kubelet[1471]: I1123 08:44:42.758167    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-230843" podStartSLOduration=1.758149656 podStartE2EDuration="1.758149656s" podCreationTimestamp="2025-11-23 08:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:42.738666636 +0000 UTC m=+1.380350377" watchObservedRunningTime="2025-11-23 08:44:42.758149656 +0000 UTC m=+1.399833429"
	Nov 23 08:44:42 embed-certs-230843 kubelet[1471]: I1123 08:44:42.772858    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-230843" podStartSLOduration=1.772746656 podStartE2EDuration="1.772746656s" podCreationTimestamp="2025-11-23 08:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:42.75805221 +0000 UTC m=+1.399735959" watchObservedRunningTime="2025-11-23 08:44:42.772746656 +0000 UTC m=+1.414430405"
	Nov 23 08:44:42 embed-certs-230843 kubelet[1471]: I1123 08:44:42.773314    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-230843" podStartSLOduration=1.773303775 podStartE2EDuration="1.773303775s" podCreationTimestamp="2025-11-23 08:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:42.773065348 +0000 UTC m=+1.414749097" watchObservedRunningTime="2025-11-23 08:44:42.773303775 +0000 UTC m=+1.414987524"
	Nov 23 08:44:43 embed-certs-230843 kubelet[1471]: I1123 08:44:43.602993    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-230843" podStartSLOduration=3.602957066 podStartE2EDuration="3.602957066s" podCreationTimestamp="2025-11-23 08:44:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:42.789700594 +0000 UTC m=+1.431384359" watchObservedRunningTime="2025-11-23 08:44:43.602957066 +0000 UTC m=+2.244640815"
	Nov 23 08:44:44 embed-certs-230843 kubelet[1471]: I1123 08:44:44.910965    1471 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:44:44 embed-certs-230843 kubelet[1471]: I1123 08:44:44.911567    1471 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040099    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fe21e2f-c557-4c67-940e-de5d501ffa9b-xtables-lock\") pod \"kindnet-cvhwv\" (UID: \"4fe21e2f-c557-4c67-940e-de5d501ffa9b\") " pod="kube-system/kindnet-cvhwv"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040146    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxnvx\" (UniqueName: \"kubernetes.io/projected/4fe21e2f-c557-4c67-940e-de5d501ffa9b-kube-api-access-qxnvx\") pod \"kindnet-cvhwv\" (UID: \"4fe21e2f-c557-4c67-940e-de5d501ffa9b\") " pod="kube-system/kindnet-cvhwv"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040175    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0df10366-97bb-4703-9840-09bb1770a2ae-kube-proxy\") pod \"kube-proxy-7q2pg\" (UID: \"0df10366-97bb-4703-9840-09bb1770a2ae\") " pod="kube-system/kube-proxy-7q2pg"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040199    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0df10366-97bb-4703-9840-09bb1770a2ae-xtables-lock\") pod \"kube-proxy-7q2pg\" (UID: \"0df10366-97bb-4703-9840-09bb1770a2ae\") " pod="kube-system/kube-proxy-7q2pg"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040217    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46f85\" (UniqueName: \"kubernetes.io/projected/0df10366-97bb-4703-9840-09bb1770a2ae-kube-api-access-46f85\") pod \"kube-proxy-7q2pg\" (UID: \"0df10366-97bb-4703-9840-09bb1770a2ae\") " pod="kube-system/kube-proxy-7q2pg"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040238    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0df10366-97bb-4703-9840-09bb1770a2ae-lib-modules\") pod \"kube-proxy-7q2pg\" (UID: \"0df10366-97bb-4703-9840-09bb1770a2ae\") " pod="kube-system/kube-proxy-7q2pg"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040255    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fe21e2f-c557-4c67-940e-de5d501ffa9b-cni-cfg\") pod \"kindnet-cvhwv\" (UID: \"4fe21e2f-c557-4c67-940e-de5d501ffa9b\") " pod="kube-system/kindnet-cvhwv"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.040283    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fe21e2f-c557-4c67-940e-de5d501ffa9b-lib-modules\") pod \"kindnet-cvhwv\" (UID: \"4fe21e2f-c557-4c67-940e-de5d501ffa9b\") " pod="kube-system/kindnet-cvhwv"
	Nov 23 08:44:46 embed-certs-230843 kubelet[1471]: I1123 08:44:46.347789    1471 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 08:44:47 embed-certs-230843 kubelet[1471]: I1123 08:44:47.740931    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7q2pg" podStartSLOduration=2.740908816 podStartE2EDuration="2.740908816s" podCreationTimestamp="2025-11-23 08:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:47.740401364 +0000 UTC m=+6.382085113" watchObservedRunningTime="2025-11-23 08:44:47.740908816 +0000 UTC m=+6.382592557"
	Nov 23 08:44:47 embed-certs-230843 kubelet[1471]: I1123 08:44:47.808816    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cvhwv" podStartSLOduration=2.808799444 podStartE2EDuration="2.808799444s" podCreationTimestamp="2025-11-23 08:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:47.808645801 +0000 UTC m=+6.450329542" watchObservedRunningTime="2025-11-23 08:44:47.808799444 +0000 UTC m=+6.450483193"
	Nov 23 08:45:27 embed-certs-230843 kubelet[1471]: I1123 08:45:27.793231    1471 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:45:27 embed-certs-230843 kubelet[1471]: I1123 08:45:27.963256    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smzgx\" (UniqueName: \"kubernetes.io/projected/c026bb7f-0356-460c-beba-7e338e6406ec-kube-api-access-smzgx\") pod \"storage-provisioner\" (UID: \"c026bb7f-0356-460c-beba-7e338e6406ec\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:27 embed-certs-230843 kubelet[1471]: I1123 08:45:27.963317    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvdst\" (UniqueName: \"kubernetes.io/projected/b07768e4-8c90-4092-a257-3ec33d787231-kube-api-access-cvdst\") pod \"coredns-66bc5c9577-64zf9\" (UID: \"b07768e4-8c90-4092-a257-3ec33d787231\") " pod="kube-system/coredns-66bc5c9577-64zf9"
	Nov 23 08:45:27 embed-certs-230843 kubelet[1471]: I1123 08:45:27.963342    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c026bb7f-0356-460c-beba-7e338e6406ec-tmp\") pod \"storage-provisioner\" (UID: \"c026bb7f-0356-460c-beba-7e338e6406ec\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:27 embed-certs-230843 kubelet[1471]: I1123 08:45:27.963360    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b07768e4-8c90-4092-a257-3ec33d787231-config-volume\") pod \"coredns-66bc5c9577-64zf9\" (UID: \"b07768e4-8c90-4092-a257-3ec33d787231\") " pod="kube-system/coredns-66bc5c9577-64zf9"
	Nov 23 08:45:28 embed-certs-230843 kubelet[1471]: I1123 08:45:28.848172    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-64zf9" podStartSLOduration=42.84815334 podStartE2EDuration="42.84815334s" podCreationTimestamp="2025-11-23 08:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:28.831566156 +0000 UTC m=+47.473249905" watchObservedRunningTime="2025-11-23 08:45:28.84815334 +0000 UTC m=+47.489837081"
	Nov 23 08:45:28 embed-certs-230843 kubelet[1471]: I1123 08:45:28.865327    1471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.865307802 podStartE2EDuration="40.865307802s" podCreationTimestamp="2025-11-23 08:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:28.851006619 +0000 UTC m=+47.492690360" watchObservedRunningTime="2025-11-23 08:45:28.865307802 +0000 UTC m=+47.506991543"
	Nov 23 08:45:31 embed-certs-230843 kubelet[1471]: I1123 08:45:31.387954    1471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6bkm\" (UniqueName: \"kubernetes.io/projected/447e0831-d5fa-46df-8ee0-a7779b02f544-kube-api-access-m6bkm\") pod \"busybox\" (UID: \"447e0831-d5fa-46df-8ee0-a7779b02f544\") " pod="default/busybox"
	
	
	==> storage-provisioner [1643cef73498bee207cedab516c684b9aa70205b15932f5d3f7a3cf78cc833b5] <==
	I1123 08:45:28.400227       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:28.403145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:28.412955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:28.413125       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:28.415505       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e48fa202-bbe7-420b-9477-919a4bddc0d5", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-230843_7bf837ae-cbd5-4db1-b4ed-7ed965a3f9ec became leader
	I1123 08:45:28.415663       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-230843_7bf837ae-cbd5-4db1-b4ed-7ed965a3f9ec!
	W1123 08:45:28.429384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:28.435500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:28.516278       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-230843_7bf837ae-cbd5-4db1-b4ed-7ed965a3f9ec!
	W1123 08:45:30.439567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:30.445041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:32.448538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:32.453071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:34.455915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:34.460551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:36.464691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:36.471679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:38.475432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:38.483501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:40.492403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:40.511419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:42.516054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:42.542057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:44.548199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:44.578981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-230843 -n embed-certs-230843
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-230843 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (15.90s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (16.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-422900 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [92f3c4e4-b38c-4d7b-b2fb-56d47cd1c156] Pending
helpers_test.go:352: "busybox" [92f3c4e4-b38c-4d7b-b2fb-56d47cd1c156] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [92f3c4e4-b38c-4d7b-b2fb-56d47cd1c156] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003627847s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-422900 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
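The failing check is the open-file-descriptor soft limit inside the pod: 'ulimit -n' in the busybox container reports 1024 where the test expects 1048576. A minimal standalone sketch of the same probe, assuming kubectl is on PATH and that the default-k8s-diff-port-422900 context and the busybox pod created above still exist (names taken from this log, not guaranteed elsewhere):

// ulimitprobe.go: re-runs the check from start_stop_delete_test.go:194 outside the suite.
// Context and pod names are the ones from this log; adjust for another profile.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "1048576" // soft nofile limit the test expects inside the pod

	out, err := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-422900",
		"exec", "busybox", "--",
		"/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl exec failed:", err, string(out))
		return
	}

	got := strings.TrimSpace(string(out))
	if got != want {
		fmt.Printf("'ulimit -n' returned %s, expected %s\n", got, want)
		return
	}
	fmt.Println("nofile soft limit matches:", got)
}

Against the state captured in this run, the sketch would print the same mismatch line as the test.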
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-422900
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-422900:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299",
	        "Created": "2025-11-23T08:46:49.639813081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222880,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:46:49.724512546Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299/hostname",
	        "HostsPath": "/var/lib/docker/containers/73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299/hosts",
	        "LogPath": "/var/lib/docker/containers/73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299/73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299-json.log",
	        "Name": "/default-k8s-diff-port-422900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-422900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-422900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299",
	                "LowerDir": "/var/lib/docker/overlay2/4865d26f0d26d5c677e00ab1a67615b2bb27a6dcc3eab25dbdc7c868c9ef4a9f-init/diff:/var/lib/docker/overlay2/88c30082a717909d357f7d81c88a05ce3487a40d372ee6dc57fb9f012e0502da/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4865d26f0d26d5c677e00ab1a67615b2bb27a6dcc3eab25dbdc7c868c9ef4a9f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4865d26f0d26d5c677e00ab1a67615b2bb27a6dcc3eab25dbdc7c868c9ef4a9f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4865d26f0d26d5c677e00ab1a67615b2bb27a6dcc3eab25dbdc7c868c9ef4a9f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-422900",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-422900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-422900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-422900",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-422900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c42cd8b8eaeefb867ee0509b6760df32e46b5e8d98e611258aa96a18705b5411",
	            "SandboxKey": "/var/run/docker/netns/c42cd8b8eaee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-422900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:17:86:41:6e:ef",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0bcab11333d4f3fa0501963a863f2548dae4e6826f2110c6a56dead952835135",
	                    "EndpointID": "3885a910efff7067f7b170ad465dbcff76112057f693e260040723cb094ce32d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-422900",
	                        "73fd58553e83"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
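In HostConfig.PortBindings above every HostPort is left empty, so Docker assigns ephemeral host ports when the container starts; the concrete mappings only appear under NetworkSettings.Ports (22/tcp -> 33083, 8444/tcp -> 33086, and so on). A small sketch of reading such a mapping back, assuming the docker CLI is available and reusing the inspect template style the cli_runner calls later in this log use for 22/tcp, here pointed at the 8444/tcp API server port:

// portlookup.go: prints the host port Docker mapped to 8444/tcp for this profile.
// Container name and port are taken from the inspect output above; adjust as needed.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`

	out, err := exec.Command("docker", "container", "inspect",
		"-f", tmpl, "default-k8s-diff-port-422900").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}

	// Against the inspect output above this prints 33086, the 127.0.0.1 port
	// through which the host reaches the cluster's 8444 API server port.
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
}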
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-422900 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-422900 logs -n 25: (1.905129143s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p no-preload-596617 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-596617            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ addons  │ enable metrics-server -p embed-certs-230843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ stop    │ -p embed-certs-230843 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ addons  │ enable dashboard -p embed-certs-230843 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p embed-certs-230843 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ image   │ no-preload-596617 image list --format=json                                                                                                                                                                                                          │ no-preload-596617            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ pause   │ -p no-preload-596617 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-596617            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ unpause │ -p no-preload-596617 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-596617            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p no-preload-596617                                                                                                                                                                                                                                │ no-preload-596617            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p no-preload-596617                                                                                                                                                                                                                                │ no-preload-596617            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p disable-driver-mounts-142181                                                                                                                                                                                                                     │ disable-driver-mounts-142181 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p default-k8s-diff-port-422900 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-422900 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:48 UTC │
	│ image   │ embed-certs-230843 image list --format=json                                                                                                                                                                                                         │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ pause   │ -p embed-certs-230843 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ unpause │ -p embed-certs-230843 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ delete  │ -p embed-certs-230843                                                                                                                                                                                                                               │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ delete  │ -p embed-certs-230843                                                                                                                                                                                                                               │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ start   │ -p newest-cni-009152 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ addons  │ enable metrics-server -p newest-cni-009152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ stop    │ -p newest-cni-009152 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:48 UTC │
	│ addons  │ enable dashboard -p newest-cni-009152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ start   │ -p newest-cni-009152 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ image   │ newest-cni-009152 image list --format=json                                                                                                                                                                                                          │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ pause   │ -p newest-cni-009152 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ unpause │ -p newest-cni-009152 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:48:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:48:00.580040  229421 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:48:00.580265  229421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:48:00.580396  229421 out.go:374] Setting ErrFile to fd 2...
	I1123 08:48:00.580408  229421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:48:00.580877  229421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:48:00.581308  229421 out.go:368] Setting JSON to false
	I1123 08:48:00.582275  229421 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5430,"bootTime":1763882251,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:48:00.582347  229421 start.go:143] virtualization:  
	I1123 08:48:00.585205  229421 out.go:179] * [newest-cni-009152] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:48:00.589274  229421 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:48:00.589321  229421 notify.go:221] Checking for updates...
	I1123 08:48:00.595665  229421 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:48:00.598605  229421 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:48:00.601574  229421 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 08:48:00.604565  229421 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:48:00.607537  229421 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:48:00.610934  229421 config.go:182] Loaded profile config "newest-cni-009152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:48:00.611591  229421 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:48:00.641956  229421 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:48:00.642069  229421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:48:00.700923  229421 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:48:00.691215945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:48:00.701032  229421 docker.go:319] overlay module found
	I1123 08:48:00.704336  229421 out.go:179] * Using the docker driver based on existing profile
	I1123 08:48:00.707211  229421 start.go:309] selected driver: docker
	I1123 08:48:00.707230  229421 start.go:927] validating driver "docker" against &{Name:newest-cni-009152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009152 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:48:00.707346  229421 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:48:00.708031  229421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:48:00.780324  229421 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:48:00.765745703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:48:00.780654  229421 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:48:00.780686  229421 cni.go:84] Creating CNI manager for ""
	I1123 08:48:00.780751  229421 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:48:00.780791  229421 start.go:353] cluster config:
	{Name:newest-cni-009152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:48:00.784038  229421 out.go:179] * Starting "newest-cni-009152" primary control-plane node in "newest-cni-009152" cluster
	I1123 08:48:00.786876  229421 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:48:00.789801  229421 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:48:00.792625  229421 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:48:00.792667  229421 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 08:48:00.792693  229421 cache.go:65] Caching tarball of preloaded images
	I1123 08:48:00.792777  229421 preload.go:238] Found /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:48:00.792786  229421 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:48:00.792899  229421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/config.json ...
	I1123 08:48:00.793118  229421 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:48:00.813090  229421 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:48:00.813112  229421 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:48:00.813126  229421 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:48:00.813155  229421 start.go:360] acquireMachinesLock for newest-cni-009152: {Name:mkfad18d37682d570ef490054702f76faece800c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:48:00.813212  229421 start.go:364] duration metric: took 35.816µs to acquireMachinesLock for "newest-cni-009152"
	I1123 08:48:00.813234  229421 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:48:00.813244  229421 fix.go:54] fixHost starting: 
	I1123 08:48:00.813533  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:00.832238  229421 fix.go:112] recreateIfNeeded on newest-cni-009152: state=Stopped err=<nil>
	W1123 08:48:00.832268  229421 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 08:47:59.320137  222471 node_ready.go:57] node "default-k8s-diff-port-422900" has "Ready":"False" status (will retry)
	W1123 08:48:01.818502  222471 node_ready.go:57] node "default-k8s-diff-port-422900" has "Ready":"False" status (will retry)
	I1123 08:48:00.835396  229421 out.go:252] * Restarting existing docker container for "newest-cni-009152" ...
	I1123 08:48:00.835489  229421 cli_runner.go:164] Run: docker start newest-cni-009152
	I1123 08:48:01.095308  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:01.115676  229421 kic.go:430] container "newest-cni-009152" state is running.
	I1123 08:48:01.116079  229421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009152
	I1123 08:48:01.138972  229421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/config.json ...
	I1123 08:48:01.139218  229421 machine.go:94] provisionDockerMachine start ...
	I1123 08:48:01.139280  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:01.164025  229421 main.go:143] libmachine: Using SSH client type: native
	I1123 08:48:01.164530  229421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1123 08:48:01.164546  229421 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:48:01.165243  229421 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:48:04.321701  229421 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-009152
	
	I1123 08:48:04.321722  229421 ubuntu.go:182] provisioning hostname "newest-cni-009152"
	I1123 08:48:04.321785  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:04.339380  229421 main.go:143] libmachine: Using SSH client type: native
	I1123 08:48:04.339700  229421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1123 08:48:04.339711  229421 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-009152 && echo "newest-cni-009152" | sudo tee /etc/hostname
	I1123 08:48:04.507459  229421 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-009152
	
	I1123 08:48:04.507547  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:04.526224  229421 main.go:143] libmachine: Using SSH client type: native
	I1123 08:48:04.526554  229421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1123 08:48:04.526577  229421 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-009152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-009152/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-009152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:48:04.678325  229421 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:48:04.678354  229421 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-2339/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-2339/.minikube}
	I1123 08:48:04.678415  229421 ubuntu.go:190] setting up certificates
	I1123 08:48:04.678425  229421 provision.go:84] configureAuth start
	I1123 08:48:04.678493  229421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009152
	I1123 08:48:04.696845  229421 provision.go:143] copyHostCerts
	I1123 08:48:04.696922  229421 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem, removing ...
	I1123 08:48:04.696940  229421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem
	I1123 08:48:04.697022  229421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem (1078 bytes)
	I1123 08:48:04.697129  229421 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem, removing ...
	I1123 08:48:04.697140  229421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem
	I1123 08:48:04.697168  229421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem (1123 bytes)
	I1123 08:48:04.697276  229421 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem, removing ...
	I1123 08:48:04.697286  229421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem
	I1123 08:48:04.697313  229421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem (1675 bytes)
	I1123 08:48:04.697392  229421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem org=jenkins.newest-cni-009152 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-009152]
	I1123 08:48:05.089396  229421 provision.go:177] copyRemoteCerts
	I1123 08:48:05.089475  229421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:48:05.089528  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:05.106868  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:05.218000  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:48:05.237668  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:48:05.256033  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:48:05.274295  229421 provision.go:87] duration metric: took 595.847239ms to configureAuth
	I1123 08:48:05.274321  229421 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:48:05.274530  229421 config.go:182] Loaded profile config "newest-cni-009152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:48:05.274543  229421 machine.go:97] duration metric: took 4.135313834s to provisionDockerMachine
	I1123 08:48:05.274552  229421 start.go:293] postStartSetup for "newest-cni-009152" (driver="docker")
	I1123 08:48:05.274561  229421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:48:05.274610  229421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:48:05.274661  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:05.292704  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:05.402991  229421 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:48:05.407371  229421 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:48:05.407397  229421 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:48:05.407408  229421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/addons for local assets ...
	I1123 08:48:05.407460  229421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/files for local assets ...
	I1123 08:48:05.407550  229421 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem -> 41512.pem in /etc/ssl/certs
	I1123 08:48:05.407656  229421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:48:05.415499  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:48:05.446181  229421 start.go:296] duration metric: took 171.615244ms for postStartSetup
	I1123 08:48:05.446299  229421 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:48:05.446342  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:05.474676  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:05.582898  229421 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:48:05.587855  229421 fix.go:56] duration metric: took 4.774604736s for fixHost
	I1123 08:48:05.587880  229421 start.go:83] releasing machines lock for "newest-cni-009152", held for 4.774655378s
	I1123 08:48:05.587946  229421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009152
	I1123 08:48:05.604807  229421 ssh_runner.go:195] Run: cat /version.json
	I1123 08:48:05.604974  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:05.605170  229421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:48:05.605314  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:05.622711  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:05.629394  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:05.729429  229421 ssh_runner.go:195] Run: systemctl --version
	I1123 08:48:05.839367  229421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:48:05.846647  229421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:48:05.846789  229421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:48:05.859158  229421 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:48:05.859229  229421 start.go:496] detecting cgroup driver to use...
	I1123 08:48:05.859276  229421 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:48:05.859352  229421 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:48:05.891062  229421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:48:05.912053  229421 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:48:05.912197  229421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:48:05.932058  229421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:48:05.953005  229421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:48:06.163900  229421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:48:06.314301  229421 docker.go:234] disabling docker service ...
	I1123 08:48:06.314413  229421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:48:06.330820  229421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:48:06.345523  229421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:48:06.470783  229421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:48:06.600536  229421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:48:06.616363  229421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:48:06.634685  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:48:06.644920  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:48:06.656025  229421 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:48:06.656180  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:48:06.668789  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:48:06.678506  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:48:06.687517  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:48:06.701469  229421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:48:06.709379  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:48:06.720326  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:48:06.729614  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:48:06.739646  229421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:48:06.747908  229421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:48:06.755691  229421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:48:06.876943  229421 ssh_runner.go:195] Run: sudo systemctl restart containerd
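The sequence above rewrites /etc/containerd/config.toml in place (sandbox image pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup = false for the cgroupfs driver detected on the host, the older runtime names rewritten to io.containerd.runc.v2, unprivileged ports re-enabled, and so on) and then reloads systemd and restarts containerd. A minimal local sketch of just the SystemdCgroup rewrite, assuming the default /etc/containerd/config.toml path and enough privileges to write it:

// cgroupfix.go: stand-in for the sed call logged above that forces
// SystemdCgroup = false in containerd's config for the cgroupfs driver.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml" // default path, as in this log

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}

	// Same substitution as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Println("write:", err)
		return
	}
	fmt.Println("SystemdCgroup forced to false; restart containerd to apply")
}

As in the log, a systemctl daemon-reload followed by systemctl restart containerd is what actually picks the change up.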
	I1123 08:48:07.076766  229421 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:48:07.076890  229421 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:48:07.081365  229421 start.go:564] Will wait 60s for crictl version
	I1123 08:48:07.081473  229421 ssh_runner.go:195] Run: which crictl
	I1123 08:48:07.085252  229421 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:48:07.112497  229421 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:48:07.112562  229421 ssh_runner.go:195] Run: containerd --version
	I1123 08:48:07.133026  229421 ssh_runner.go:195] Run: containerd --version
	I1123 08:48:07.156560  229421 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:48:07.159666  229421 cli_runner.go:164] Run: docker network inspect newest-cni-009152 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:48:07.176054  229421 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:48:07.179868  229421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:48:07.192716  229421 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 08:48:07.195656  229421 kubeadm.go:884] updating cluster {Name:newest-cni-009152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:48:07.195814  229421 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:48:07.195901  229421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:48:07.220885  229421 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:48:07.220913  229421 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:48:07.220969  229421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:48:07.245884  229421 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:48:07.245907  229421 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:48:07.245921  229421 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 08:48:07.246026  229421 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-009152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:48:07.246089  229421 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:48:07.271202  229421 cni.go:84] Creating CNI manager for ""
	I1123 08:48:07.271278  229421 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:48:07.271306  229421 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 08:48:07.271334  229421 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-009152 NodeName:newest-cni-009152 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:48:07.271452  229421 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-009152"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:48:07.271520  229421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:48:07.279801  229421 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:48:07.279870  229421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:48:07.287268  229421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 08:48:07.300368  229421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:48:07.314014  229421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
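The kubeadm config generated above, now staged at /var/tmp/minikube/kubeadm.yaml.new, shows how the single extra option kubeadm.pod-network-cidr=10.42.0.0/16 fans out: it becomes networking.podSubnet in ClusterConfiguration and clusterCIDR in KubeProxyConfiguration, while the kubelet section pins the containerd socket and the cgroupfs cgroup driver. A hypothetical spot-check of the staged file (the test itself only diffs it against the live copy later):

	# manual sketch; not executed by the test
	sudo grep -nE 'podSubnet|clusterCIDR|containerRuntimeEndpoint|cgroupDriver' /var/tmp/minikube/kubeadm.yaml.new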
	I1123 08:48:07.327313  229421 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:48:07.331351  229421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:48:07.342245  229421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:48:07.483475  229421 ssh_runner.go:195] Run: sudo systemctl start kubelet
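At this point the kubelet systemd unit (/lib/systemd/system/kubelet.service) and its kubeadm drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) have been written, systemd has been reloaded, and kubelet has been started. A hypothetical way to inspect the merged unit on the node, not something the test does:

	# systemctl cat shows the unit plus all drop-ins as systemd merges them
	systemctl cat kubelet.service
	systemctl is-active kubelet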
	I1123 08:48:07.500759  229421 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152 for IP: 192.168.76.2
	I1123 08:48:07.500822  229421 certs.go:195] generating shared ca certs ...
	I1123 08:48:07.500851  229421 certs.go:227] acquiring lock for ca certs: {Name:mke0fc62f41acbef5eb3e84af3a3b8f9858bd1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:48:07.501018  229421 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key
	I1123 08:48:07.501086  229421 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key
	I1123 08:48:07.501108  229421 certs.go:257] generating profile certs ...
	I1123 08:48:07.501253  229421 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/client.key
	I1123 08:48:07.501375  229421 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/apiserver.key.ab0208e7
	I1123 08:48:07.501484  229421 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/proxy-client.key
	I1123 08:48:07.501645  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem (1338 bytes)
	W1123 08:48:07.501708  229421 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151_empty.pem, impossibly tiny 0 bytes
	I1123 08:48:07.501736  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 08:48:07.501804  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:48:07.501874  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:48:07.501929  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem (1675 bytes)
	I1123 08:48:07.502014  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:48:07.502675  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:48:07.525114  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:48:07.545868  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:48:07.573240  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:48:07.595183  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:48:07.618043  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:48:07.645670  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:48:07.670538  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:48:07.696686  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /usr/share/ca-certificates/41512.pem (1708 bytes)
	I1123 08:48:07.717536  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:48:07.737652  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem --> /usr/share/ca-certificates/4151.pem (1338 bytes)
	I1123 08:48:07.775860  229421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:48:07.790750  229421 ssh_runner.go:195] Run: openssl version
	I1123 08:48:07.797210  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4151.pem && ln -fs /usr/share/ca-certificates/4151.pem /etc/ssl/certs/4151.pem"
	I1123 08:48:07.806727  229421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4151.pem
	I1123 08:48:07.810532  229421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:02 /usr/share/ca-certificates/4151.pem
	I1123 08:48:07.810591  229421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4151.pem
	I1123 08:48:07.855410  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4151.pem /etc/ssl/certs/51391683.0"
	I1123 08:48:07.863813  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41512.pem && ln -fs /usr/share/ca-certificates/41512.pem /etc/ssl/certs/41512.pem"
	I1123 08:48:07.872253  229421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41512.pem
	I1123 08:48:07.876353  229421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:02 /usr/share/ca-certificates/41512.pem
	I1123 08:48:07.876423  229421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41512.pem
	I1123 08:48:07.918130  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41512.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:48:07.928752  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:48:07.937491  229421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:48:07.942162  229421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:48:07.942272  229421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:48:07.984268  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
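The same three-step pattern repeats for each certificate above: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, then symlink /etc/ssl/certs/<hash>.0 to it so the system trust store can resolve it (51391683.0, 3ec20f2e.0 and b5213941.0 are those hashes for 4151.pem, 41512.pem and minikubeCA.pem respectively). A hypothetical recomputation for one of them, not part of the run:

	# the printed hash should match the .0 symlink created above
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"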
	I1123 08:48:07.992400  229421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:48:07.996358  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:48:08.039219  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:48:08.084186  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:48:08.156484  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:48:08.242726  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:48:08.350770  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
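Each openssl x509 -noout -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, and a non-zero status would presumably make minikube regenerate the cert rather than reuse it. A hypothetical manual equivalent for one of the certs checked above:

	# prints the expiry, then repeats the same 24h validity check
	sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/front-proxy-client.crt
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/front-proxy-client.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"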
	I1123 08:48:08.422353  229421 kubeadm.go:401] StartCluster: {Name:newest-cni-009152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:48:08.422493  229421 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:48:08.422599  229421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:48:08.471286  229421 cri.go:89] found id: "716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649"
	I1123 08:48:08.471352  229421 cri.go:89] found id: "ddd51d511f888522e584ed8709efba7357686c844f772619cd8773eba928139e"
	I1123 08:48:08.471370  229421 cri.go:89] found id: "4ffb307ab190b3e1ff6a1406c7bf6f69843646f21a7da65cd752bb94c40e82c0"
	I1123 08:48:08.471390  229421 cri.go:89] found id: "6c58cdad0d6313772221b757d8fa63638924cccdf9e9d7bbf2beb6813a79c535"
	I1123 08:48:08.471432  229421 cri.go:89] found id: "902418d2981f97a115cdb0ede99d88cd007b2db3240cae7efd77c1679d90e61e"
	I1123 08:48:08.471454  229421 cri.go:89] found id: "92ef837b4105380bccb8d8576672d181ac258d2d1ad4e45968334f0f89a30820"
	I1123 08:48:08.471472  229421 cri.go:89] found id: "d453ae4cd1b9dff9ea395efb45c3124cff49a2026c5f8691b7e0495d0432a5f8"
	I1123 08:48:08.471515  229421 cri.go:89] found id: ""
	I1123 08:48:08.471602  229421 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 08:48:08.510835  229421 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4","pid":898,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4/rootfs","created":"2025-11-23T08:48:08.23566237Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-009152_252826eb0083650d177174ae5b43e593","io.kubernetes.cri.sandbox-memory":"0","io.
kubernetes.cri.sandbox-name":"etcd-newest-cni-009152","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"252826eb0083650d177174ae5b43e593"},"owner":"root"},{"ociVersion":"1.2.1","id":"6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0","pid":913,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0/rootfs","created":"2025-11-23T08:48:08.282438647Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0","io.kubernetes.cri.sandbox-log-direct
ory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-009152_e0f5ead57064c4ed80d6bd6c76760288","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-009152","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e0f5ead57064c4ed80d6bd6c76760288"},"owner":"root"},{"ociVersion":"1.2.1","id":"716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4"
,"io.kubernetes.cri.sandbox-name":"etcd-newest-cni-009152","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"252826eb0083650d177174ae5b43e593"},"owner":"root"},{"ociVersion":"1.2.1","id":"b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f","pid":930,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f/rootfs","created":"2025-11-23T08:48:08.316251053Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f","io.kubernetes.cri.sandbox-log-d
irectory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-009152_8d8a1e0cc09a0133e35438ddf9c67296","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-009152","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8d8a1e0cc09a0133e35438ddf9c67296"},"owner":"root"},{"ociVersion":"1.2.1","id":"fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8/rootfs","created":"2025-11-23T08:48:08.340324045Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.k
ubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-009152_b4a7a5eb75e7073e58143d1a525e35d5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-009152","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a7a5eb75e7073e58143d1a525e35d5"},"owner":"root"}]
	I1123 08:48:08.511067  229421 cri.go:126] list returned 5 containers
	I1123 08:48:08.511097  229421 cri.go:129] container: {ID:173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4 Status:running}
	I1123 08:48:08.511125  229421 cri.go:131] skipping 173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4 - not in ps
	I1123 08:48:08.511160  229421 cri.go:129] container: {ID:6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0 Status:running}
	I1123 08:48:08.511185  229421 cri.go:131] skipping 6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0 - not in ps
	I1123 08:48:08.511211  229421 cri.go:129] container: {ID:716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649 Status:stopped}
	I1123 08:48:08.511249  229421 cri.go:135] skipping {716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649 stopped}: state = "stopped", want "paused"
	I1123 08:48:08.511277  229421 cri.go:129] container: {ID:b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f Status:running}
	I1123 08:48:08.511300  229421 cri.go:131] skipping b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f - not in ps
	I1123 08:48:08.511336  229421 cri.go:129] container: {ID:fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8 Status:running}
	I1123 08:48:08.511362  229421 cri.go:131] skipping fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8 - not in ps
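The listing above reconciles two views of the node: crictl ps -a filtered to the kube-system namespace label, and runc list inside containerd's k8s.io runc root. Sandbox (pause) tasks only appear on the runc side, so they are skipped as "not in ps", and the stopped etcd task is skipped because this pass is only interested in containers whose state is "paused". The same two listings, as a hypothetical manual sketch:

	# manual reproduction of the two listings being reconciled above
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc --root /run/containerd/runc/k8s.io list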
	I1123 08:48:08.511454  229421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:48:08.521267  229421 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:48:08.521332  229421 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:48:08.521465  229421 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:48:08.534177  229421 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:48:08.534879  229421 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-009152" does not appear in /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:48:08.535208  229421 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-2339/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-009152" cluster setting kubeconfig missing "newest-cni-009152" context setting]
	I1123 08:48:08.535715  229421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:48:08.537553  229421 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:48:08.546234  229421 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 08:48:08.546317  229421 kubeadm.go:602] duration metric: took 24.94986ms to restartPrimaryControlPlane
	I1123 08:48:08.546344  229421 kubeadm.go:403] duration metric: took 124.00232ms to StartCluster
	I1123 08:48:08.546373  229421 settings.go:142] acquiring lock: {Name:mkfb77243b31dfe604b438e7da3f1bce2ba7b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:48:08.546458  229421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:48:08.547429  229421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:48:08.547695  229421 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:48:08.548072  229421 config.go:182] Loaded profile config "newest-cni-009152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:48:08.548147  229421 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:48:08.548396  229421 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-009152"
	I1123 08:48:08.548424  229421 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-009152"
	W1123 08:48:08.548490  229421 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:48:08.548528  229421 host.go:66] Checking if "newest-cni-009152" exists ...
	I1123 08:48:08.548457  229421 addons.go:70] Setting default-storageclass=true in profile "newest-cni-009152"
	I1123 08:48:08.548667  229421 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-009152"
	I1123 08:48:08.548971  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:08.548463  229421 addons.go:70] Setting dashboard=true in profile "newest-cni-009152"
	I1123 08:48:08.549481  229421 addons.go:239] Setting addon dashboard=true in "newest-cni-009152"
	W1123 08:48:08.549488  229421 addons.go:248] addon dashboard should already be in state true
	I1123 08:48:08.549506  229421 host.go:66] Checking if "newest-cni-009152" exists ...
	I1123 08:48:08.549893  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:08.550237  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:08.556864  229421 out.go:179] * Verifying Kubernetes components...
	I1123 08:48:08.548472  229421 addons.go:70] Setting metrics-server=true in profile "newest-cni-009152"
	I1123 08:48:08.557227  229421 addons.go:239] Setting addon metrics-server=true in "newest-cni-009152"
	W1123 08:48:08.557259  229421 addons.go:248] addon metrics-server should already be in state true
	I1123 08:48:08.557402  229421 host.go:66] Checking if "newest-cni-009152" exists ...
	I1123 08:48:08.558713  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:08.562321  229421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:48:08.626500  229421 addons.go:239] Setting addon default-storageclass=true in "newest-cni-009152"
	W1123 08:48:08.626528  229421 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:48:08.626553  229421 host.go:66] Checking if "newest-cni-009152" exists ...
	I1123 08:48:08.626960  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:08.641566  229421 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:48:08.644610  229421 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 08:48:08.644885  229421 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:48:08.644933  229421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:48:08.645010  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:08.650316  229421 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 08:48:08.650350  229421 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 08:48:08.650424  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:08.656483  229421 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:48:08.659424  229421 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1123 08:48:04.318926  222471 node_ready.go:57] node "default-k8s-diff-port-422900" has "Ready":"False" status (will retry)
	I1123 08:48:05.818344  222471 node_ready.go:49] node "default-k8s-diff-port-422900" is "Ready"
	I1123 08:48:05.818368  222471 node_ready.go:38] duration metric: took 41.002870002s for node "default-k8s-diff-port-422900" to be "Ready" ...
	I1123 08:48:05.818383  222471 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:48:05.818437  222471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:48:05.834937  222471 api_server.go:72] duration metric: took 42.130112502s to wait for apiserver process to appear ...
	I1123 08:48:05.834958  222471 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:48:05.835012  222471 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:48:05.861996  222471 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
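The healthz probe above hits the apiserver directly over HTTPS on 8444, the non-default port that gives this default-k8s-diff-port profile its name, and passes as soon as the body is "ok". A hypothetical curl equivalent, not issued by the test:

	# -k skips TLS verification; the probe only cares about liveness
	curl -sk https://192.168.85.2:8444/healthz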
	I1123 08:48:05.863198  222471 api_server.go:141] control plane version: v1.34.1
	I1123 08:48:05.863220  222471 api_server.go:131] duration metric: took 28.256681ms to wait for apiserver health ...
	I1123 08:48:05.863230  222471 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:48:05.876222  222471 system_pods.go:59] 8 kube-system pods found
	I1123 08:48:05.876261  222471 system_pods.go:61] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:48:05.876269  222471 system_pods.go:61] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:05.876275  222471 system_pods.go:61] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:05.876279  222471 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:05.876283  222471 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:05.876287  222471 system_pods.go:61] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:05.876290  222471 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:05.876295  222471 system_pods.go:61] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:48:05.876302  222471 system_pods.go:74] duration metric: took 13.066445ms to wait for pod list to return data ...
	I1123 08:48:05.876311  222471 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:48:05.879409  222471 default_sa.go:45] found service account: "default"
	I1123 08:48:05.879483  222471 default_sa.go:55] duration metric: took 3.153818ms for default service account to be created ...
	I1123 08:48:05.879507  222471 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:48:05.883487  222471 system_pods.go:86] 8 kube-system pods found
	I1123 08:48:05.883569  222471 system_pods.go:89] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:48:05.883592  222471 system_pods.go:89] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:05.883629  222471 system_pods.go:89] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:05.883653  222471 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:05.883674  222471 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:05.883719  222471 system_pods.go:89] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:05.883744  222471 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:05.883766  222471 system_pods.go:89] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:48:05.883872  222471 retry.go:31] will retry after 307.730013ms: missing components: kube-dns
	I1123 08:48:06.196026  222471 system_pods.go:86] 8 kube-system pods found
	I1123 08:48:06.196136  222471 system_pods.go:89] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:48:06.196182  222471 system_pods.go:89] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:06.196218  222471 system_pods.go:89] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:06.196238  222471 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:06.196277  222471 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:06.196301  222471 system_pods.go:89] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:06.196323  222471 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:06.196373  222471 system_pods.go:89] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:48:06.196418  222471 retry.go:31] will retry after 350.493058ms: missing components: kube-dns
	I1123 08:48:06.552818  222471 system_pods.go:86] 8 kube-system pods found
	I1123 08:48:06.552900  222471 system_pods.go:89] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:48:06.552921  222471 system_pods.go:89] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:06.552942  222471 system_pods.go:89] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:06.552979  222471 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:06.552997  222471 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:06.553018  222471 system_pods.go:89] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:06.553054  222471 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:06.553081  222471 system_pods.go:89] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:48:06.553112  222471 retry.go:31] will retry after 345.301251ms: missing components: kube-dns
	I1123 08:48:06.906318  222471 system_pods.go:86] 8 kube-system pods found
	I1123 08:48:06.906350  222471 system_pods.go:89] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:48:06.906357  222471 system_pods.go:89] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:06.906364  222471 system_pods.go:89] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:06.906370  222471 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:06.906374  222471 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:06.906378  222471 system_pods.go:89] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:06.906383  222471 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:06.906388  222471 system_pods.go:89] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:48:06.906403  222471 retry.go:31] will retry after 424.887908ms: missing components: kube-dns
	I1123 08:48:07.336647  222471 system_pods.go:86] 8 kube-system pods found
	I1123 08:48:07.336675  222471 system_pods.go:89] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Running
	I1123 08:48:07.336683  222471 system_pods.go:89] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:07.336689  222471 system_pods.go:89] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:07.336694  222471 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:07.336698  222471 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:07.336703  222471 system_pods.go:89] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:07.336708  222471 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:07.336712  222471 system_pods.go:89] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Running
	I1123 08:48:07.336719  222471 system_pods.go:126] duration metric: took 1.457190611s to wait for k8s-apps to be running ...
	I1123 08:48:07.336726  222471 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:48:07.336780  222471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:48:07.356255  222471 system_svc.go:56] duration metric: took 19.519641ms WaitForService to wait for kubelet
	I1123 08:48:07.356282  222471 kubeadm.go:587] duration metric: took 43.651461152s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:48:07.356298  222471 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:48:07.359129  222471 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:48:07.359155  222471 node_conditions.go:123] node cpu capacity is 2
	I1123 08:48:07.359170  222471 node_conditions.go:105] duration metric: took 2.866956ms to run NodePressure ...
	I1123 08:48:07.359232  222471 start.go:242] waiting for startup goroutines ...
	I1123 08:48:07.359245  222471 start.go:247] waiting for cluster config update ...
	I1123 08:48:07.359256  222471 start.go:256] writing updated cluster config ...
	I1123 08:48:07.359597  222471 ssh_runner.go:195] Run: rm -f paused
	I1123 08:48:07.364226  222471 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:48:07.367842  222471 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qctlw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.372827  222471 pod_ready.go:94] pod "coredns-66bc5c9577-qctlw" is "Ready"
	I1123 08:48:07.372848  222471 pod_ready.go:86] duration metric: took 4.983453ms for pod "coredns-66bc5c9577-qctlw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.375299  222471 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.390495  222471 pod_ready.go:94] pod "etcd-default-k8s-diff-port-422900" is "Ready"
	I1123 08:48:07.390573  222471 pod_ready.go:86] duration metric: took 15.202298ms for pod "etcd-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.394739  222471 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.402177  222471 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-422900" is "Ready"
	I1123 08:48:07.402254  222471 pod_ready.go:86] duration metric: took 7.441426ms for pod "kube-apiserver-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.407443  222471 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.771605  222471 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-422900" is "Ready"
	I1123 08:48:07.771634  222471 pod_ready.go:86] duration metric: took 364.12813ms for pod "kube-controller-manager-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.969574  222471 pod_ready.go:83] waiting for pod "kube-proxy-jrwr5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:08.368482  222471 pod_ready.go:94] pod "kube-proxy-jrwr5" is "Ready"
	I1123 08:48:08.368511  222471 pod_ready.go:86] duration metric: took 398.901022ms for pod "kube-proxy-jrwr5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:08.570122  222471 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:08.968182  222471 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-422900" is "Ready"
	I1123 08:48:08.968207  222471 pod_ready.go:86] duration metric: took 398.061382ms for pod "kube-scheduler-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:08.968218  222471 pod_ready.go:40] duration metric: took 1.603964206s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:48:09.102910  222471 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:48:09.107910  222471 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-422900" cluster and "default" namespace by default
	I1123 08:48:08.662313  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:48:08.662340  229421 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:48:08.662408  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:08.709822  229421 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:48:08.709843  229421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:48:08.709902  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:08.726707  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:08.732799  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:08.739352  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:08.755880  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:08.954074  229421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:48:08.973390  229421 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:48:08.973522  229421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:48:09.105676  229421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:48:09.304753  229421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:48:09.338080  229421 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 08:48:09.338106  229421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 08:48:09.417965  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:48:09.417988  229421 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:48:09.474522  229421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:48:09.534389  229421 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 08:48:09.534412  229421 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 08:48:09.595176  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:48:09.595197  229421 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:48:09.614032  229421 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:48:09.614054  229421 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 08:48:09.683006  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:48:09.683026  229421 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:48:09.749770  229421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:48:09.906048  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:48:09.906068  229421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:48:10.086666  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:48:10.086744  229421 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:48:10.188555  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:48:10.188624  229421 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:48:10.255655  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:48:10.255734  229421 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:48:10.292490  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:48:10.292551  229421 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:48:10.318856  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:48:10.318918  229421 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:48:10.353433  229421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:48:15.303532  229421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.197825637s)
	I1123 08:48:17.107683  229421 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.633021896s)
	I1123 08:48:17.107851  229421 api_server.go:72] duration metric: took 8.560101234s to wait for apiserver process to appear ...
	I1123 08:48:17.107890  229421 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:48:17.107931  229421 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:48:17.108138  229421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.803357131s)
	I1123 08:48:17.117932  229421 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 08:48:17.120446  229421 api_server.go:141] control plane version: v1.34.1
	I1123 08:48:17.120520  229421 api_server.go:131] duration metric: took 12.600567ms to wait for apiserver health ...
	I1123 08:48:17.120532  229421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:48:17.125381  229421 system_pods.go:59] 9 kube-system pods found
	I1123 08:48:17.125496  229421 system_pods.go:61] "coredns-66bc5c9577-2f96t" [1e3a238e-1b7f-4780-98e8-1ca450282eab] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:48:17.125535  229421 system_pods.go:61] "etcd-newest-cni-009152" [45e528b1-fc1a-43d9-bcd3-742447073748] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:48:17.125565  229421 system_pods.go:61] "kindnet-27cxr" [0433112f-46f3-4f6e-ac7a-f327bac4220f] Running
	I1123 08:48:17.125589  229421 system_pods.go:61] "kube-apiserver-newest-cni-009152" [db374a56-80b9-4625-b349-43e14e28795b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:48:17.125626  229421 system_pods.go:61] "kube-controller-manager-newest-cni-009152" [a0500ae6-4a1c-4497-875a-1096ed512bf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:48:17.125647  229421 system_pods.go:61] "kube-proxy-6rqcs" [5e20e1c3-53af-46c9-8717-3e0b65db8fc1] Running
	I1123 08:48:17.125670  229421 system_pods.go:61] "kube-scheduler-newest-cni-009152" [6ed99eb2-e2f5-4b9a-b695-632339e2b512] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:48:17.125709  229421 system_pods.go:61] "metrics-server-746fcd58dc-jjpvt" [a838c5f6-1a38-466f-9d8b-178bd3b8d3bb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:48:17.125730  229421 system_pods.go:61] "storage-provisioner" [efb1af07-ac6f-403e-b1e7-eb5b9b90d9f8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:48:17.125761  229421 system_pods.go:74] duration metric: took 5.213986ms to wait for pod list to return data ...
	I1123 08:48:17.125808  229421 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:48:17.128890  229421 default_sa.go:45] found service account: "default"
	I1123 08:48:17.128953  229421 default_sa.go:55] duration metric: took 3.124756ms for default service account to be created ...
	I1123 08:48:17.129001  229421 kubeadm.go:587] duration metric: took 8.581251171s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:48:17.129030  229421 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:48:17.132513  229421 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:48:17.132587  229421 node_conditions.go:123] node cpu capacity is 2
	I1123 08:48:17.132614  229421 node_conditions.go:105] duration metric: took 3.546225ms to run NodePressure ...
	I1123 08:48:17.132655  229421 start.go:242] waiting for startup goroutines ...
	I1123 08:48:17.163221  229421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.413408108s)
	I1123 08:48:17.163437  229421 addons.go:495] Verifying addon metrics-server=true in "newest-cni-009152"
	I1123 08:48:17.163526  229421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.810019981s)
	I1123 08:48:17.166740  229421 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-009152 addons enable metrics-server
	
	I1123 08:48:17.169774  229421 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1123 08:48:17.172797  229421 addons.go:530] duration metric: took 8.624645819s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1123 08:48:17.172905  229421 start.go:247] waiting for cluster config update ...
	I1123 08:48:17.172930  229421 start.go:256] writing updated cluster config ...
	I1123 08:48:17.173296  229421 ssh_runner.go:195] Run: rm -f paused
	I1123 08:48:17.254564  229421 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:48:17.257750  229421 out.go:179] * Done! kubectl is now configured to use "newest-cni-009152" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	fbce34324f7e9       1611cd07b61d5       8 seconds ago        Running             busybox                   0                   5f0b2e2adef5f       busybox                                                default
	c81fe9aa72108       138784d87c9c5       15 seconds ago       Running             coredns                   0                   692e20f85d4d9       coredns-66bc5c9577-qctlw                               kube-system
	3d82912652c69       ba04bb24b9575       15 seconds ago       Running             storage-provisioner       0                   d9205045f65a9       storage-provisioner                                    kube-system
	65dc1bda083b8       05baa95f5142d       56 seconds ago       Running             kube-proxy                0                   f6b807b01d3f9       kube-proxy-jrwr5                                       kube-system
	5ba4f4ccce243       b1a8c6f707935       56 seconds ago       Running             kindnet-cni               0                   0c78a74bb5346       kindnet-f2zrk                                          kube-system
	77d7d9411bb7e       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   c77abab5c615d       kube-controller-manager-default-k8s-diff-port-422900   kube-system
	633f3c0b9836a       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   1a6dad8102dd2       kube-scheduler-default-k8s-diff-port-422900            kube-system
	c24564cc452db       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   d1e5adb24eb7f       kube-apiserver-default-k8s-diff-port-422900            kube-system
	8732d9c3aa176       a1894772a478e       About a minute ago   Running             etcd                      0                   2b15dbba62907       etcd-default-k8s-diff-port-422900                      kube-system
	
	
	==> containerd <==
	Nov 23 08:48:05 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:05.966755505Z" level=info msg="connecting to shim 3d82912652c69d404d4d77e211bdd74d1b0de8b7a5cec57e067c47cae6edc5a4" address="unix:///run/containerd/s/7b2aa4fc9f79348feeb1744145364c6d26f011beaa4996e7190d3a47d4910cc4" protocol=ttrpc version=3
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.015940361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qctlw,Uid:54e1b924-5413-4e3d-ad3c-51f6af499016,Namespace:kube-system,Attempt:0,} returns sandbox id \"692e20f85d4d97e68f27646a051153f87370bdd3aae26c32d463570dbd8a89a2\""
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.044976817Z" level=info msg="CreateContainer within sandbox \"692e20f85d4d97e68f27646a051153f87370bdd3aae26c32d463570dbd8a89a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.058719077Z" level=info msg="Container c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.070770059Z" level=info msg="StartContainer for \"3d82912652c69d404d4d77e211bdd74d1b0de8b7a5cec57e067c47cae6edc5a4\" returns successfully"
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.088974256Z" level=info msg="CreateContainer within sandbox \"692e20f85d4d97e68f27646a051153f87370bdd3aae26c32d463570dbd8a89a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e\""
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.090041189Z" level=info msg="StartContainer for \"c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e\""
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.105779559Z" level=info msg="connecting to shim c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e" address="unix:///run/containerd/s/3e0e06640614723dd327238dd89dd54b8de4840c751689a55a593a27cdbc3313" protocol=ttrpc version=3
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.217663959Z" level=info msg="StartContainer for \"c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e\" returns successfully"
	Nov 23 08:48:09 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:09.812465338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:92f3c4e4-b38c-4d7b-b2fb-56d47cd1c156,Namespace:default,Attempt:0,}"
	Nov 23 08:48:09 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:09.867384046Z" level=info msg="connecting to shim 5f0b2e2adef5f6ff067d7ddbe4e04da3b735c7e192097b219488ea85ccb9684a" address="unix:///run/containerd/s/40e21a2fbdfd1b9e9e160ddef84ec73b78504abebf3c55f2739bd009673de7fa" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:48:10 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:10.017708565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:92f3c4e4-b38c-4d7b-b2fb-56d47cd1c156,Namespace:default,Attempt:0,} returns sandbox id \"5f0b2e2adef5f6ff067d7ddbe4e04da3b735c7e192097b219488ea85ccb9684a\""
	Nov 23 08:48:10 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:10.024030921Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.387378380Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.389679282Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937186"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.408071065Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.413443904Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.414754098Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.390535285s"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.414908053Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.428030728Z" level=info msg="CreateContainer within sandbox \"5f0b2e2adef5f6ff067d7ddbe4e04da3b735c7e192097b219488ea85ccb9684a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.440547603Z" level=info msg="Container fbce34324f7e9e12d9eb802ef034e3aa8628e7f38e399229ab78a4da4c151b03: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.450453878Z" level=info msg="CreateContainer within sandbox \"5f0b2e2adef5f6ff067d7ddbe4e04da3b735c7e192097b219488ea85ccb9684a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"fbce34324f7e9e12d9eb802ef034e3aa8628e7f38e399229ab78a4da4c151b03\""
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.453543753Z" level=info msg="StartContainer for \"fbce34324f7e9e12d9eb802ef034e3aa8628e7f38e399229ab78a4da4c151b03\""
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.465047045Z" level=info msg="connecting to shim fbce34324f7e9e12d9eb802ef034e3aa8628e7f38e399229ab78a4da4c151b03" address="unix:///run/containerd/s/40e21a2fbdfd1b9e9e160ddef84ec73b78504abebf3c55f2739bd009673de7fa" protocol=ttrpc version=3
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.581795459Z" level=info msg="StartContainer for \"fbce34324f7e9e12d9eb802ef034e3aa8628e7f38e399229ab78a4da4c151b03\" returns successfully"
	
	
	==> coredns [c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35789 - 18734 "HINFO IN 5272406496239846288.2426256441528353838. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005418124s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-422900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-422900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=default-k8s-diff-port-422900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_47_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:47:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-422900
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:48:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:48:20 +0000   Sun, 23 Nov 2025 08:47:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:48:20 +0000   Sun, 23 Nov 2025 08:47:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:48:20 +0000   Sun, 23 Nov 2025 08:47:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:48:20 +0000   Sun, 23 Nov 2025 08:48:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-422900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                d73c838c-8202-472f-9042-cce9ff16e283
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-qctlw                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     58s
	  kube-system                 etcd-default-k8s-diff-port-422900                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-f2zrk                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-default-k8s-diff-port-422900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-422900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-proxy-jrwr5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-default-k8s-diff-port-422900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasSufficientPID
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node default-k8s-diff-port-422900 event: Registered Node default-k8s-diff-port-422900 in Controller
	  Normal   NodeReady                16s                kubelet          Node default-k8s-diff-port-422900 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	
	
	==> etcd [8732d9c3aa176b961a8886b66b2192dedc6027d6bb6eb829f53bcfb146373fb8] <==
	{"level":"warn","ts":"2025-11-23T08:47:11.483927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.561400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.601464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.608368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.646824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.677066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.733522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.756279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.781181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.822531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.856376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.882033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.919045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.947315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.979752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:12.034507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:12.081897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:12.288620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48554","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:47:23.689063Z","caller":"traceutil/trace.go:172","msg":"trace[969222846] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"142.365421ms","start":"2025-11-23T08:47:23.546678Z","end":"2025-11-23T08:47:23.689044Z","steps":["trace[969222846] 'process raft request'  (duration: 142.105479ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:47:23.689205Z","caller":"traceutil/trace.go:172","msg":"trace[996455750] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"142.411132ms","start":"2025-11-23T08:47:23.546788Z","end":"2025-11-23T08:47:23.689199Z","steps":["trace[996455750] 'process raft request'  (duration: 142.029901ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:47:23.689129Z","caller":"traceutil/trace.go:172","msg":"trace[525610800] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"143.017466ms","start":"2025-11-23T08:47:23.546100Z","end":"2025-11-23T08:47:23.689117Z","steps":["trace[525610800] 'process raft request'  (duration: 102.379373ms)","trace[525610800] 'compare'  (duration: 40.193131ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:47:23.875964Z","caller":"traceutil/trace.go:172","msg":"trace[1297937374] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"155.378345ms","start":"2025-11-23T08:47:23.720566Z","end":"2025-11-23T08:47:23.875944Z","steps":["trace[1297937374] 'process raft request'  (duration: 69.587077ms)","trace[1297937374] 'compare'  (duration: 82.71544ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:47:23.876077Z","caller":"traceutil/trace.go:172","msg":"trace[508316777] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"155.415514ms","start":"2025-11-23T08:47:23.720655Z","end":"2025-11-23T08:47:23.876071Z","steps":["trace[508316777] 'process raft request'  (duration: 152.32038ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:47:23.876119Z","caller":"traceutil/trace.go:172","msg":"trace[776766712] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"111.514187ms","start":"2025-11-23T08:47:23.764600Z","end":"2025-11-23T08:47:23.876115Z","steps":["trace[776766712] 'process raft request'  (duration: 108.414137ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:47:23.877187Z","caller":"traceutil/trace.go:172","msg":"trace[1907366165] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"104.214538ms","start":"2025-11-23T08:47:23.772951Z","end":"2025-11-23T08:47:23.877165Z","steps":["trace[1907366165] 'process raft request'  (duration: 100.098181ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:48:21 up  1:30,  0 user,  load average: 6.12, 4.60, 3.58
	Linux default-k8s-diff-port-422900 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ba4f4ccce243d6bd5f8419e76671f6e4f63d89e07efa058612ddc013cea3d26] <==
	I1123 08:47:25.075838       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:47:25.165711       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:47:25.165843       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:47:25.165856       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:47:25.165872       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:47:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:47:25.368138       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:47:25.368156       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:47:25.368165       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:47:25.368852       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:47:55.369158       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:47:55.369274       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:47:55.369353       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:47:55.369502       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 08:47:56.569163       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:47:56.569202       1 metrics.go:72] Registering metrics
	I1123 08:47:56.569256       1 controller.go:711] "Syncing nftables rules"
	I1123 08:48:05.369491       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:48:05.369545       1 main.go:301] handling current node
	I1123 08:48:15.367326       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:48:15.367392       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c24564cc452db3082b6f04eff5a957e5d95cd8345c35167e34d0542828cd9c3a] <==
	I1123 08:47:13.927869       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 08:47:13.970297       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:47:13.975783       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 08:47:14.036945       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:47:14.052478       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:47:14.121747       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:47:14.124720       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:47:14.256500       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:47:14.306441       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:47:14.306466       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:47:16.312523       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:47:16.443844       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:47:16.618441       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:47:16.636742       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:47:16.638836       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:47:16.652965       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:47:17.423715       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:47:18.429785       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:47:18.477811       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:47:18.500006       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:47:23.169075       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:47:23.178509       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:47:23.324034       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:47:23.545196       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:48:19.717715       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:56522: use of closed network connection
	
	
	==> kube-controller-manager [77d7d9411bb7e0d4cfb89ca9086ea353d80c270805cdcdc9170342555057bf8c] <==
	I1123 08:47:22.506527       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:47:22.506701       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:47:22.506867       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-422900"
	I1123 08:47:22.507011       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:47:22.507119       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:47:22.507222       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:47:22.508136       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:47:22.509814       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:47:22.509827       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:47:22.509844       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:47:22.509856       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:47:22.509871       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:47:22.509878       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:47:22.510033       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:47:22.514922       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:47:22.514411       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:47:22.516760       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:47:22.518148       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:47:22.525218       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:47:22.525485       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:47:22.531091       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:47:22.540420       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:47:22.543091       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:47:22.573968       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:48:07.513591       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [65dc1bda083b8e1de067446e56ea3cbcd1faa3a76018a5ab231a1a2ef8c1abf0] <==
	I1123 08:47:25.692674       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:47:25.843509       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:47:26.043908       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:47:26.046156       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:47:26.046394       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:47:26.099459       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:47:26.099519       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:47:26.106700       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:47:26.107314       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:47:26.109240       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:47:26.111799       1 config.go:200] "Starting service config controller"
	I1123 08:47:26.117597       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:47:26.117784       1 config.go:309] "Starting node config controller"
	I1123 08:47:26.121085       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:47:26.121179       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:47:26.114836       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:47:26.121686       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:47:26.121764       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:47:26.114857       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:47:26.122542       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:47:26.123136       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:47:26.218000       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [633f3c0b9836ab275f8d03494d0b328d3dd8a33fdc9be07f245eb8c2995982e8] <==
	I1123 08:47:13.789280       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:47:17.487715       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:47:17.492069       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:47:17.505012       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:47:17.507539       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:47:17.507709       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:47:17.507849       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:47:17.534055       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:47:17.534256       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:47:17.534385       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:47:17.534455       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:47:17.609557       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 08:47:17.638023       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:47:17.637959       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:47:19 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:19.134501    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38e4568b9ca181975632ac03bdc1c733-usr-local-share-ca-certificates\") pod \"kube-controller-manager-default-k8s-diff-port-422900\" (UID: \"38e4568b9ca181975632ac03bdc1c733\") " pod="kube-system/kube-controller-manager-default-k8s-diff-port-422900"
	Nov 23 08:47:19 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:19.205248    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-422900" podStartSLOduration=0.205228606 podStartE2EDuration="205.228606ms" podCreationTimestamp="2025-11-23 08:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:19.187688399 +0000 UTC m=+0.811506267" watchObservedRunningTime="2025-11-23 08:47:19.205228606 +0000 UTC m=+0.829046482"
	Nov 23 08:47:19 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:19.205346    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-422900" podStartSLOduration=0.205340821 podStartE2EDuration="205.340821ms" podCreationTimestamp="2025-11-23 08:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:19.20503799 +0000 UTC m=+0.828855891" watchObservedRunningTime="2025-11-23 08:47:19.205340821 +0000 UTC m=+0.829158688"
	Nov 23 08:47:22 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:22.582528    1461 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:47:22 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:22.583549    1461 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.073777    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxx7h\" (UniqueName: \"kubernetes.io/projected/83f0d2e5-4c5a-443e-acbe-533cd427a3f5-kube-api-access-jxx7h\") pod \"kube-proxy-jrwr5\" (UID: \"83f0d2e5-4c5a-443e-acbe-533cd427a3f5\") " pod="kube-system/kube-proxy-jrwr5"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.073865    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/016a8003-854a-4072-bd80-6ecf03b5af32-cni-cfg\") pod \"kindnet-f2zrk\" (UID: \"016a8003-854a-4072-bd80-6ecf03b5af32\") " pod="kube-system/kindnet-f2zrk"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.073926    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/83f0d2e5-4c5a-443e-acbe-533cd427a3f5-kube-proxy\") pod \"kube-proxy-jrwr5\" (UID: \"83f0d2e5-4c5a-443e-acbe-533cd427a3f5\") " pod="kube-system/kube-proxy-jrwr5"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.073948    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/016a8003-854a-4072-bd80-6ecf03b5af32-xtables-lock\") pod \"kindnet-f2zrk\" (UID: \"016a8003-854a-4072-bd80-6ecf03b5af32\") " pod="kube-system/kindnet-f2zrk"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.073982    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/016a8003-854a-4072-bd80-6ecf03b5af32-lib-modules\") pod \"kindnet-f2zrk\" (UID: \"016a8003-854a-4072-bd80-6ecf03b5af32\") " pod="kube-system/kindnet-f2zrk"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.074002    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl4vd\" (UniqueName: \"kubernetes.io/projected/016a8003-854a-4072-bd80-6ecf03b5af32-kube-api-access-kl4vd\") pod \"kindnet-f2zrk\" (UID: \"016a8003-854a-4072-bd80-6ecf03b5af32\") " pod="kube-system/kindnet-f2zrk"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.074043    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83f0d2e5-4c5a-443e-acbe-533cd427a3f5-xtables-lock\") pod \"kube-proxy-jrwr5\" (UID: \"83f0d2e5-4c5a-443e-acbe-533cd427a3f5\") " pod="kube-system/kube-proxy-jrwr5"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.074068    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83f0d2e5-4c5a-443e-acbe-533cd427a3f5-lib-modules\") pod \"kube-proxy-jrwr5\" (UID: \"83f0d2e5-4c5a-443e-acbe-533cd427a3f5\") " pod="kube-system/kube-proxy-jrwr5"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.281210    1461 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 08:47:25 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:25.859384    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-f2zrk" podStartSLOduration=2.859356111 podStartE2EDuration="2.859356111s" podCreationTimestamp="2025-11-23 08:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:25.858874152 +0000 UTC m=+7.482692028" watchObservedRunningTime="2025-11-23 08:47:25.859356111 +0000 UTC m=+7.483173987"
	Nov 23 08:47:29 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:29.713148    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jrwr5" podStartSLOduration=6.713128868 podStartE2EDuration="6.713128868s" podCreationTimestamp="2025-11-23 08:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:25.940770858 +0000 UTC m=+7.564588734" watchObservedRunningTime="2025-11-23 08:47:29.713128868 +0000 UTC m=+11.336946752"
	Nov 23 08:48:05 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:05.398801    1461 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:48:05 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:05.528813    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5e808c7e-c721-46a8-96ed-969c255a51eb-tmp\") pod \"storage-provisioner\" (UID: \"5e808c7e-c721-46a8-96ed-969c255a51eb\") " pod="kube-system/storage-provisioner"
	Nov 23 08:48:05 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:05.529066    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz5tk\" (UniqueName: \"kubernetes.io/projected/5e808c7e-c721-46a8-96ed-969c255a51eb-kube-api-access-wz5tk\") pod \"storage-provisioner\" (UID: \"5e808c7e-c721-46a8-96ed-969c255a51eb\") " pod="kube-system/storage-provisioner"
	Nov 23 08:48:05 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:05.629740    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv289\" (UniqueName: \"kubernetes.io/projected/54e1b924-5413-4e3d-ad3c-51f6af499016-kube-api-access-lv289\") pod \"coredns-66bc5c9577-qctlw\" (UID: \"54e1b924-5413-4e3d-ad3c-51f6af499016\") " pod="kube-system/coredns-66bc5c9577-qctlw"
	Nov 23 08:48:05 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:05.629956    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54e1b924-5413-4e3d-ad3c-51f6af499016-config-volume\") pod \"coredns-66bc5c9577-qctlw\" (UID: \"54e1b924-5413-4e3d-ad3c-51f6af499016\") " pod="kube-system/coredns-66bc5c9577-qctlw"
	Nov 23 08:48:06 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:06.945854    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.945834205 podStartE2EDuration="41.945834205s" podCreationTimestamp="2025-11-23 08:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:48:06.930407535 +0000 UTC m=+48.554225403" watchObservedRunningTime="2025-11-23 08:48:06.945834205 +0000 UTC m=+48.569652081"
	Nov 23 08:48:09 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:09.485928    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qctlw" podStartSLOduration=46.485899232 podStartE2EDuration="46.485899232s" podCreationTimestamp="2025-11-23 08:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:48:06.958194163 +0000 UTC m=+48.582012056" watchObservedRunningTime="2025-11-23 08:48:09.485899232 +0000 UTC m=+51.109717108"
	Nov 23 08:48:09 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:09.670249    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f85ht\" (UniqueName: \"kubernetes.io/projected/92f3c4e4-b38c-4d7b-b2fb-56d47cd1c156-kube-api-access-f85ht\") pod \"busybox\" (UID: \"92f3c4e4-b38c-4d7b-b2fb-56d47cd1c156\") " pod="default/busybox"
	Nov 23 08:48:19 default-k8s-diff-port-422900 kubelet[1461]: E1123 08:48:19.716061    1461 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.85.2:44412->192.168.85.2:10010: read tcp 192.168.85.2:44412->192.168.85.2:10010: read: connection reset by peer
	
	
	==> storage-provisioner [3d82912652c69d404d4d77e211bdd74d1b0de8b7a5cec57e067c47cae6edc5a4] <==
	I1123 08:48:06.076233       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:48:06.101457       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:48:06.101529       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:48:06.107349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:06.118324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:48:06.118485       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:48:06.119524       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-422900_20e27392-4975-4bdb-badb-b3a986897ab5!
	I1123 08:48:06.122436       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f83257d8-5529-4c23-8339-a6b35debddd7", APIVersion:"v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-422900_20e27392-4975-4bdb-badb-b3a986897ab5 became leader
	W1123 08:48:06.135670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:06.145145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:48:06.219948       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-422900_20e27392-4975-4bdb-badb-b3a986897ab5!
	W1123 08:48:08.149486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:08.158944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:10.167997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:10.173106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:12.176307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:12.185755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:14.188860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:14.194396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:16.198360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:16.211028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:18.214872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:18.220840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:20.229692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:20.237369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
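The storage-provisioner block above is healthy: it initializes, acquires the kube-system/k8s.io-minikube-hostpath leader-election lease, and starts its controller. The repeated W-level lines are deprecation warnings surfaced from the API server because the client still renews that lease through the legacy v1 Endpoints object rather than discovery.k8s.io/v1 EndpointSlice; they are informational, not errors. To look at both objects by hand (a sketch, assuming the kubeconfig and context from this run are still usable):

	kubectl --context default-k8s-diff-port-422900 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context default-k8s-diff-port-422900 -n kube-system get endpointslices.discovery.k8s.io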
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-422900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
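To replay the two post-mortem checks above (helpers_test.go:262 and helpers_test.go:269) by hand, roughly the following commands apply, assuming the out/ binary and the kubeconfig from this run are still on the workspace:

	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900
	kubectl --context default-k8s-diff-port-422900 get po -A -o wide --field-selector=status.phase!=Running

The first confirms the API server component is reachable; the second lists any pods not in the Running phase across all namespaces.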
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-422900
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-422900:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299",
	        "Created": "2025-11-23T08:46:49.639813081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222880,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:46:49.724512546Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299/hostname",
	        "HostsPath": "/var/lib/docker/containers/73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299/hosts",
	        "LogPath": "/var/lib/docker/containers/73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299/73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299-json.log",
	        "Name": "/default-k8s-diff-port-422900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-422900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-422900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "73fd58553e83a6bfc9438764d96f266e091c0db95ed497aecd3e247e9dd7e299",
	                "LowerDir": "/var/lib/docker/overlay2/4865d26f0d26d5c677e00ab1a67615b2bb27a6dcc3eab25dbdc7c868c9ef4a9f-init/diff:/var/lib/docker/overlay2/88c30082a717909d357f7d81c88a05ce3487a40d372ee6dc57fb9f012e0502da/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4865d26f0d26d5c677e00ab1a67615b2bb27a6dcc3eab25dbdc7c868c9ef4a9f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4865d26f0d26d5c677e00ab1a67615b2bb27a6dcc3eab25dbdc7c868c9ef4a9f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4865d26f0d26d5c677e00ab1a67615b2bb27a6dcc3eab25dbdc7c868c9ef4a9f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-422900",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-422900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-422900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-422900",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-422900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c42cd8b8eaeefb867ee0509b6760df32e46b5e8d98e611258aa96a18705b5411",
	            "SandboxKey": "/var/run/docker/netns/c42cd8b8eaee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-422900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:17:86:41:6e:ef",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0bcab11333d4f3fa0501963a863f2548dae4e6826f2110c6a56dead952835135",
	                    "EndpointID": "3885a910efff7067f7b170ad465dbcff76112057f693e260040723cb094ce32d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-422900",
	                        "73fd58553e83"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
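Two details of the inspect output above matter when reading the port numbers: HostConfig.PortBindings requests dynamically allocated host ports (every HostPort is empty), and the ports Docker actually assigned are recorded under NetworkSettings.Ports, e.g. the 8444/tcp apiserver port is published on 127.0.0.1:33086. Either of the following pulls that mapping directly (a sketch, assuming the container is still running on the CI host):

	docker port default-k8s-diff-port-422900 8444/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-422900

The second form is the same Go template minikube itself uses later in these logs to discover the SSH port (22/tcp).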
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-422900 logs -n 25
E1123 08:48:23.525452    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-422900 logs -n 25: (1.468334506s)
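The E-level cert_rotation line interleaved above comes from the logs invocation itself: its client config still references a client.crt under the addons-243441 profile, and that file is no longer on disk ("no such file or directory"). It is almost certainly a stale reference to a different profile rather than part of the default-k8s-diff-port-422900 failure. A quick way to see which profiles still have material in the workspace (assuming the CI workspace is still available):

	ls /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/
	out/minikube-linux-arm64 profile list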
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable metrics-server -p embed-certs-230843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ stop    │ -p embed-certs-230843 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ addons  │ enable dashboard -p embed-certs-230843 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p embed-certs-230843 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ image   │ no-preload-596617 image list --format=json                                                                                                                                                                                                          │ no-preload-596617            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ pause   │ -p no-preload-596617 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-596617            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ unpause │ -p no-preload-596617 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-596617            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p no-preload-596617                                                                                                                                                                                                                                │ no-preload-596617            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p no-preload-596617                                                                                                                                                                                                                                │ no-preload-596617            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p disable-driver-mounts-142181                                                                                                                                                                                                                     │ disable-driver-mounts-142181 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p default-k8s-diff-port-422900 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-422900 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:48 UTC │
	│ image   │ embed-certs-230843 image list --format=json                                                                                                                                                                                                         │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ pause   │ -p embed-certs-230843 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ unpause │ -p embed-certs-230843 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ delete  │ -p embed-certs-230843                                                                                                                                                                                                                               │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ delete  │ -p embed-certs-230843                                                                                                                                                                                                                               │ embed-certs-230843           │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ start   │ -p newest-cni-009152 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ addons  │ enable metrics-server -p newest-cni-009152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ stop    │ -p newest-cni-009152 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:48 UTC │
	│ addons  │ enable dashboard -p newest-cni-009152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ start   │ -p newest-cni-009152 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ image   │ newest-cni-009152 image list --format=json                                                                                                                                                                                                          │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ pause   │ -p newest-cni-009152 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ unpause │ -p newest-cni-009152 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ delete  │ -p newest-cni-009152                                                                                                                                                                                                                                │ newest-cni-009152            │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:48:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:48:00.580040  229421 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:48:00.580265  229421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:48:00.580396  229421 out.go:374] Setting ErrFile to fd 2...
	I1123 08:48:00.580408  229421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:48:00.580877  229421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:48:00.581308  229421 out.go:368] Setting JSON to false
	I1123 08:48:00.582275  229421 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5430,"bootTime":1763882251,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:48:00.582347  229421 start.go:143] virtualization:  
	I1123 08:48:00.585205  229421 out.go:179] * [newest-cni-009152] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:48:00.589274  229421 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:48:00.589321  229421 notify.go:221] Checking for updates...
	I1123 08:48:00.595665  229421 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:48:00.598605  229421 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:48:00.601574  229421 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 08:48:00.604565  229421 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:48:00.607537  229421 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:48:00.610934  229421 config.go:182] Loaded profile config "newest-cni-009152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:48:00.611591  229421 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:48:00.641956  229421 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:48:00.642069  229421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:48:00.700923  229421 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:48:00.691215945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:48:00.701032  229421 docker.go:319] overlay module found
	I1123 08:48:00.704336  229421 out.go:179] * Using the docker driver based on existing profile
	I1123 08:48:00.707211  229421 start.go:309] selected driver: docker
	I1123 08:48:00.707230  229421 start.go:927] validating driver "docker" against &{Name:newest-cni-009152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009152 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:48:00.707346  229421 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:48:00.708031  229421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:48:00.780324  229421 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:48:00.765745703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:48:00.780654  229421 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:48:00.780686  229421 cni.go:84] Creating CNI manager for ""
	I1123 08:48:00.780751  229421 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:48:00.780791  229421 start.go:353] cluster config:
	{Name:newest-cni-009152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:48:00.784038  229421 out.go:179] * Starting "newest-cni-009152" primary control-plane node in "newest-cni-009152" cluster
	I1123 08:48:00.786876  229421 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:48:00.789801  229421 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:48:00.792625  229421 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:48:00.792667  229421 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 08:48:00.792693  229421 cache.go:65] Caching tarball of preloaded images
	I1123 08:48:00.792777  229421 preload.go:238] Found /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:48:00.792786  229421 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:48:00.792899  229421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/config.json ...
	I1123 08:48:00.793118  229421 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:48:00.813090  229421 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:48:00.813112  229421 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:48:00.813126  229421 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:48:00.813155  229421 start.go:360] acquireMachinesLock for newest-cni-009152: {Name:mkfad18d37682d570ef490054702f76faece800c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:48:00.813212  229421 start.go:364] duration metric: took 35.816µs to acquireMachinesLock for "newest-cni-009152"
	I1123 08:48:00.813234  229421 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:48:00.813244  229421 fix.go:54] fixHost starting: 
	I1123 08:48:00.813533  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:00.832238  229421 fix.go:112] recreateIfNeeded on newest-cni-009152: state=Stopped err=<nil>
	W1123 08:48:00.832268  229421 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 08:47:59.320137  222471 node_ready.go:57] node "default-k8s-diff-port-422900" has "Ready":"False" status (will retry)
	W1123 08:48:01.818502  222471 node_ready.go:57] node "default-k8s-diff-port-422900" has "Ready":"False" status (will retry)
	I1123 08:48:00.835396  229421 out.go:252] * Restarting existing docker container for "newest-cni-009152" ...
	I1123 08:48:00.835489  229421 cli_runner.go:164] Run: docker start newest-cni-009152
	I1123 08:48:01.095308  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:01.115676  229421 kic.go:430] container "newest-cni-009152" state is running.
	I1123 08:48:01.116079  229421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009152
	I1123 08:48:01.138972  229421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/config.json ...
	I1123 08:48:01.139218  229421 machine.go:94] provisionDockerMachine start ...
	I1123 08:48:01.139280  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:01.164025  229421 main.go:143] libmachine: Using SSH client type: native
	I1123 08:48:01.164530  229421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1123 08:48:01.164546  229421 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:48:01.165243  229421 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:48:04.321701  229421 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-009152
	
	I1123 08:48:04.321722  229421 ubuntu.go:182] provisioning hostname "newest-cni-009152"
	I1123 08:48:04.321785  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:04.339380  229421 main.go:143] libmachine: Using SSH client type: native
	I1123 08:48:04.339700  229421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1123 08:48:04.339711  229421 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-009152 && echo "newest-cni-009152" | sudo tee /etc/hostname
	I1123 08:48:04.507459  229421 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-009152
	
	I1123 08:48:04.507547  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:04.526224  229421 main.go:143] libmachine: Using SSH client type: native
	I1123 08:48:04.526554  229421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1123 08:48:04.526577  229421 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-009152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-009152/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-009152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:48:04.678325  229421 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:48:04.678354  229421 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-2339/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-2339/.minikube}
	I1123 08:48:04.678415  229421 ubuntu.go:190] setting up certificates
	I1123 08:48:04.678425  229421 provision.go:84] configureAuth start
	I1123 08:48:04.678493  229421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009152
	I1123 08:48:04.696845  229421 provision.go:143] copyHostCerts
	I1123 08:48:04.696922  229421 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem, removing ...
	I1123 08:48:04.696940  229421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem
	I1123 08:48:04.697022  229421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/ca.pem (1078 bytes)
	I1123 08:48:04.697129  229421 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem, removing ...
	I1123 08:48:04.697140  229421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem
	I1123 08:48:04.697168  229421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/cert.pem (1123 bytes)
	I1123 08:48:04.697276  229421 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem, removing ...
	I1123 08:48:04.697286  229421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem
	I1123 08:48:04.697313  229421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-2339/.minikube/key.pem (1675 bytes)
	I1123 08:48:04.697392  229421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem org=jenkins.newest-cni-009152 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-009152]
	I1123 08:48:05.089396  229421 provision.go:177] copyRemoteCerts
	I1123 08:48:05.089475  229421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:48:05.089528  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:05.106868  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:05.218000  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:48:05.237668  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:48:05.256033  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:48:05.274295  229421 provision.go:87] duration metric: took 595.847239ms to configureAuth
	I1123 08:48:05.274321  229421 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:48:05.274530  229421 config.go:182] Loaded profile config "newest-cni-009152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:48:05.274543  229421 machine.go:97] duration metric: took 4.135313834s to provisionDockerMachine
	I1123 08:48:05.274552  229421 start.go:293] postStartSetup for "newest-cni-009152" (driver="docker")
	I1123 08:48:05.274561  229421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:48:05.274610  229421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:48:05.274661  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:05.292704  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:05.402991  229421 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:48:05.407371  229421 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:48:05.407397  229421 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:48:05.407408  229421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/addons for local assets ...
	I1123 08:48:05.407460  229421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-2339/.minikube/files for local assets ...
	I1123 08:48:05.407550  229421 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem -> 41512.pem in /etc/ssl/certs
	I1123 08:48:05.407656  229421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:48:05.415499  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:48:05.446181  229421 start.go:296] duration metric: took 171.615244ms for postStartSetup
	I1123 08:48:05.446299  229421 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:48:05.446342  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:05.474676  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:05.582898  229421 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:48:05.587855  229421 fix.go:56] duration metric: took 4.774604736s for fixHost
	I1123 08:48:05.587880  229421 start.go:83] releasing machines lock for "newest-cni-009152", held for 4.774655378s
	I1123 08:48:05.587946  229421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009152
	I1123 08:48:05.604807  229421 ssh_runner.go:195] Run: cat /version.json
	I1123 08:48:05.604974  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:05.605170  229421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:48:05.605314  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:05.622711  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:05.629394  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:05.729429  229421 ssh_runner.go:195] Run: systemctl --version
	I1123 08:48:05.839367  229421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:48:05.846647  229421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:48:05.846789  229421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:48:05.859158  229421 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:48:05.859229  229421 start.go:496] detecting cgroup driver to use...
	I1123 08:48:05.859276  229421 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:48:05.859352  229421 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:48:05.891062  229421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:48:05.912053  229421 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:48:05.912197  229421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:48:05.932058  229421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:48:05.953005  229421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:48:06.163900  229421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:48:06.314301  229421 docker.go:234] disabling docker service ...
	I1123 08:48:06.314413  229421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:48:06.330820  229421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:48:06.345523  229421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:48:06.470783  229421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:48:06.600536  229421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:48:06.616363  229421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:48:06.634685  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:48:06.644920  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:48:06.656025  229421 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:48:06.656180  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:48:06.668789  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:48:06.678506  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:48:06.687517  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:48:06.701469  229421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:48:06.709379  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:48:06.720326  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:48:06.729614  229421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:48:06.739646  229421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:48:06.747908  229421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:48:06.755691  229421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:48:06.876943  229421 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:48:07.076766  229421 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:48:07.076890  229421 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:48:07.081365  229421 start.go:564] Will wait 60s for crictl version
	I1123 08:48:07.081473  229421 ssh_runner.go:195] Run: which crictl
	I1123 08:48:07.085252  229421 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:48:07.112497  229421 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:48:07.112562  229421 ssh_runner.go:195] Run: containerd --version
	I1123 08:48:07.133026  229421 ssh_runner.go:195] Run: containerd --version
	I1123 08:48:07.156560  229421 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:48:07.159666  229421 cli_runner.go:164] Run: docker network inspect newest-cni-009152 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:48:07.176054  229421 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:48:07.179868  229421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
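The /etc/hosts update above is an idempotent replace-then-append: any existing host.minikube.internal line is filtered out before the fresh gateway entry is written back via sudo cp, so repeated starts never accumulate duplicates. A generalized sketch of the same pattern (the helper name update_host_entry is illustrative, not part of minikube):

    # Sketch: drop any previous "<TAB><name>" line, then append the new IP/name pair
    update_host_entry() {
      local ip="$1" name="$2"
      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
      sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
    }
    update_host_entry 192.168.76.1 host.minikube.internal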
	I1123 08:48:07.192716  229421 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 08:48:07.195656  229421 kubeadm.go:884] updating cluster {Name:newest-cni-009152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:48:07.195814  229421 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:48:07.195901  229421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:48:07.220885  229421 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:48:07.220913  229421 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:48:07.220969  229421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:48:07.245884  229421 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:48:07.245907  229421 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:48:07.245921  229421 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 08:48:07.246026  229421 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-009152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
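The drop-in generated here relies on systemd override semantics: the bare ExecStart= line clears the ExecStart inherited from the base kubelet.service, and the following ExecStart= sets the minikube-specific command line, so only the second one runs. A sketch for inspecting the merged result on the node (standard systemctl commands, assuming the kic base image's systemd):

    # Show the effective unit, including the 10-kubeadm.conf drop-in written a few lines below
    sudo systemctl cat kubelet
    # Confirm which ExecStart systemd will actually run
    sudo systemctl show kubelet -p ExecStart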
	I1123 08:48:07.246089  229421 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:48:07.271202  229421 cni.go:84] Creating CNI manager for ""
	I1123 08:48:07.271278  229421 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:48:07.271306  229421 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 08:48:07.271334  229421 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-009152 NodeName:newest-cni-009152 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:48:07.271452  229421 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-009152"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
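The config above is a single multi-document file combining InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; it is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. One way to sanity-check such a file before kubeadm consumes it is a dry run (a sketch, not something the test itself performs):

    # Sketch: validate the generated config without touching the running cluster
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run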
	I1123 08:48:07.271520  229421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:48:07.279801  229421 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:48:07.279870  229421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:48:07.287268  229421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 08:48:07.300368  229421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:48:07.314014  229421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1123 08:48:07.327313  229421 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:48:07.331351  229421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:48:07.342245  229421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:48:07.483475  229421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:48:07.500759  229421 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152 for IP: 192.168.76.2
	I1123 08:48:07.500822  229421 certs.go:195] generating shared ca certs ...
	I1123 08:48:07.500851  229421 certs.go:227] acquiring lock for ca certs: {Name:mke0fc62f41acbef5eb3e84af3a3b8f9858bd1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:48:07.501018  229421 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key
	I1123 08:48:07.501086  229421 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key
	I1123 08:48:07.501108  229421 certs.go:257] generating profile certs ...
	I1123 08:48:07.501253  229421 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/client.key
	I1123 08:48:07.501375  229421 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/apiserver.key.ab0208e7
	I1123 08:48:07.501484  229421 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/proxy-client.key
	I1123 08:48:07.501645  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem (1338 bytes)
	W1123 08:48:07.501708  229421 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151_empty.pem, impossibly tiny 0 bytes
	I1123 08:48:07.501736  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 08:48:07.501804  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:48:07.501874  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:48:07.501929  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/certs/key.pem (1675 bytes)
	I1123 08:48:07.502014  229421 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem (1708 bytes)
	I1123 08:48:07.502675  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:48:07.525114  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:48:07.545868  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:48:07.573240  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:48:07.595183  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:48:07.618043  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:48:07.645670  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:48:07.670538  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/newest-cni-009152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:48:07.696686  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/ssl/certs/41512.pem --> /usr/share/ca-certificates/41512.pem (1708 bytes)
	I1123 08:48:07.717536  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:48:07.737652  229421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-2339/.minikube/certs/4151.pem --> /usr/share/ca-certificates/4151.pem (1338 bytes)
	I1123 08:48:07.775860  229421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:48:07.790750  229421 ssh_runner.go:195] Run: openssl version
	I1123 08:48:07.797210  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4151.pem && ln -fs /usr/share/ca-certificates/4151.pem /etc/ssl/certs/4151.pem"
	I1123 08:48:07.806727  229421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4151.pem
	I1123 08:48:07.810532  229421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:02 /usr/share/ca-certificates/4151.pem
	I1123 08:48:07.810591  229421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4151.pem
	I1123 08:48:07.855410  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4151.pem /etc/ssl/certs/51391683.0"
	I1123 08:48:07.863813  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41512.pem && ln -fs /usr/share/ca-certificates/41512.pem /etc/ssl/certs/41512.pem"
	I1123 08:48:07.872253  229421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41512.pem
	I1123 08:48:07.876353  229421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:02 /usr/share/ca-certificates/41512.pem
	I1123 08:48:07.876423  229421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41512.pem
	I1123 08:48:07.918130  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41512.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:48:07.928752  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:48:07.937491  229421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:48:07.942162  229421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:48:07.942272  229421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:48:07.984268  229421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
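The test -L ... || ln -fs commands reproduce OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs must be reachable as <subject_hash>.0, where the hash is exactly what the openssl x509 -hash -noout calls in this log print (b5213941 for minikubeCA, for example). The link name can be derived the same way by hand (sketch mirroring the commands above):

    # Sketch: compute the subject hash and create the symlink OpenSSL expects
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"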
	I1123 08:48:07.992400  229421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:48:07.996358  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:48:08.039219  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:48:08.084186  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:48:08.156484  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:48:08.242726  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:48:08.350770  229421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
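The openssl x509 -checkend 86400 calls above are expiry probes: the command exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is how minikube decides whether the existing control-plane certificates can be reused on restart. An equivalent standalone check (sketch):

    # Sketch: exit status reports whether the cert survives the next 24 hours
    if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
      echo "apiserver.crt is valid for at least another day"
    else
      echo "apiserver.crt expires within 24h and would be regenerated"
    fi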
	I1123 08:48:08.422353  229421 kubeadm.go:401] StartCluster: {Name:newest-cni-009152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:48:08.422493  229421 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:48:08.422599  229421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:48:08.471286  229421 cri.go:89] found id: "716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649"
	I1123 08:48:08.471352  229421 cri.go:89] found id: "ddd51d511f888522e584ed8709efba7357686c844f772619cd8773eba928139e"
	I1123 08:48:08.471370  229421 cri.go:89] found id: "4ffb307ab190b3e1ff6a1406c7bf6f69843646f21a7da65cd752bb94c40e82c0"
	I1123 08:48:08.471390  229421 cri.go:89] found id: "6c58cdad0d6313772221b757d8fa63638924cccdf9e9d7bbf2beb6813a79c535"
	I1123 08:48:08.471432  229421 cri.go:89] found id: "902418d2981f97a115cdb0ede99d88cd007b2db3240cae7efd77c1679d90e61e"
	I1123 08:48:08.471454  229421 cri.go:89] found id: "92ef837b4105380bccb8d8576672d181ac258d2d1ad4e45968334f0f89a30820"
	I1123 08:48:08.471472  229421 cri.go:89] found id: "d453ae4cd1b9dff9ea395efb45c3124cff49a2026c5f8691b7e0495d0432a5f8"
	I1123 08:48:08.471515  229421 cri.go:89] found id: ""
	I1123 08:48:08.471602  229421 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 08:48:08.510835  229421 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4","pid":898,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4/rootfs","created":"2025-11-23T08:48:08.23566237Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-009152_252826eb0083650d177174ae5b43e593","io.kubernetes.cri.sandbox-memory":"0","io.
kubernetes.cri.sandbox-name":"etcd-newest-cni-009152","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"252826eb0083650d177174ae5b43e593"},"owner":"root"},{"ociVersion":"1.2.1","id":"6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0","pid":913,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0/rootfs","created":"2025-11-23T08:48:08.282438647Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0","io.kubernetes.cri.sandbox-log-direct
ory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-009152_e0f5ead57064c4ed80d6bd6c76760288","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-009152","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e0f5ead57064c4ed80d6bd6c76760288"},"owner":"root"},{"ociVersion":"1.2.1","id":"716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4"
,"io.kubernetes.cri.sandbox-name":"etcd-newest-cni-009152","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"252826eb0083650d177174ae5b43e593"},"owner":"root"},{"ociVersion":"1.2.1","id":"b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f","pid":930,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f/rootfs","created":"2025-11-23T08:48:08.316251053Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f","io.kubernetes.cri.sandbox-log-d
irectory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-009152_8d8a1e0cc09a0133e35438ddf9c67296","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-009152","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8d8a1e0cc09a0133e35438ddf9c67296"},"owner":"root"},{"ociVersion":"1.2.1","id":"fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8/rootfs","created":"2025-11-23T08:48:08.340324045Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.k
ubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-009152_b4a7a5eb75e7073e58143d1a525e35d5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-009152","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a7a5eb75e7073e58143d1a525e35d5"},"owner":"root"}]
	I1123 08:48:08.511067  229421 cri.go:126] list returned 5 containers
	I1123 08:48:08.511097  229421 cri.go:129] container: {ID:173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4 Status:running}
	I1123 08:48:08.511125  229421 cri.go:131] skipping 173d20f1e53779ab42aeb1598e4f201d51312a4584734b0739b1749fef6d90e4 - not in ps
	I1123 08:48:08.511160  229421 cri.go:129] container: {ID:6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0 Status:running}
	I1123 08:48:08.511185  229421 cri.go:131] skipping 6c2d6373498a20e36926c04dad32fc3eb54ec820798cd4f0a06f6e5d73f29ed0 - not in ps
	I1123 08:48:08.511211  229421 cri.go:129] container: {ID:716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649 Status:stopped}
	I1123 08:48:08.511249  229421 cri.go:135] skipping {716f656958115b476db8b3d9c659d338e6a2b4d1f87f1868d91c6f2184c70649 stopped}: state = "stopped", want "paused"
	I1123 08:48:08.511277  229421 cri.go:129] container: {ID:b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f Status:running}
	I1123 08:48:08.511300  229421 cri.go:131] skipping b52f8bf998b33df8226ed6493ac4efda8cc0486d15e2d080b9cb4d058ee6007f - not in ps
	I1123 08:48:08.511336  229421 cri.go:129] container: {ID:fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8 Status:running}
	I1123 08:48:08.511362  229421 cri.go:131] skipping fe9c0ab120d1aaf95726bd9f3f5508b86f5a967edf4fed3ac863bb84c39a52d8 - not in ps
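The skipping decisions above come from cross-referencing two listings: the crictl ps IDs (the kube-system containers found earlier) and runc list -f json for the k8s.io containerd namespace. Only entries whose ID appears in the crictl output and whose runc status matches the requested state ("paused" here) are kept, so the running sandboxes are dropped as "not in ps" and the stopped etcd container is dropped on state. The raw runc view can be reproduced manually (sketch; jq availability in the node image is an assumption):

    # Sketch: list task IDs and states in the k8s.io containerd namespace
    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | "\(.id) \(.status)"'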
	I1123 08:48:08.511454  229421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:48:08.521267  229421 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:48:08.521332  229421 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:48:08.521465  229421 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:48:08.534177  229421 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:48:08.534879  229421 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-009152" does not appear in /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:48:08.535208  229421 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-2339/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-009152" cluster setting kubeconfig missing "newest-cni-009152" context setting]
	I1123 08:48:08.535715  229421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:48:08.537553  229421 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:48:08.546234  229421 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 08:48:08.546317  229421 kubeadm.go:602] duration metric: took 24.94986ms to restartPrimaryControlPlane
	I1123 08:48:08.546344  229421 kubeadm.go:403] duration metric: took 124.00232ms to StartCluster
	I1123 08:48:08.546373  229421 settings.go:142] acquiring lock: {Name:mkfb77243b31dfe604b438e7da3f1bce2ba7b5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:48:08.546458  229421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:48:08.547429  229421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/kubeconfig: {Name:mka042f83263da2d190b70c2277735bf705fab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:48:08.547695  229421 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:48:08.548072  229421 config.go:182] Loaded profile config "newest-cni-009152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:48:08.548147  229421 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:48:08.548396  229421 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-009152"
	I1123 08:48:08.548424  229421 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-009152"
	W1123 08:48:08.548490  229421 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:48:08.548528  229421 host.go:66] Checking if "newest-cni-009152" exists ...
	I1123 08:48:08.548457  229421 addons.go:70] Setting default-storageclass=true in profile "newest-cni-009152"
	I1123 08:48:08.548667  229421 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-009152"
	I1123 08:48:08.548971  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:08.548463  229421 addons.go:70] Setting dashboard=true in profile "newest-cni-009152"
	I1123 08:48:08.549481  229421 addons.go:239] Setting addon dashboard=true in "newest-cni-009152"
	W1123 08:48:08.549488  229421 addons.go:248] addon dashboard should already be in state true
	I1123 08:48:08.549506  229421 host.go:66] Checking if "newest-cni-009152" exists ...
	I1123 08:48:08.549893  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:08.550237  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:08.556864  229421 out.go:179] * Verifying Kubernetes components...
	I1123 08:48:08.548472  229421 addons.go:70] Setting metrics-server=true in profile "newest-cni-009152"
	I1123 08:48:08.557227  229421 addons.go:239] Setting addon metrics-server=true in "newest-cni-009152"
	W1123 08:48:08.557259  229421 addons.go:248] addon metrics-server should already be in state true
	I1123 08:48:08.557402  229421 host.go:66] Checking if "newest-cni-009152" exists ...
	I1123 08:48:08.558713  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:08.562321  229421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:48:08.626500  229421 addons.go:239] Setting addon default-storageclass=true in "newest-cni-009152"
	W1123 08:48:08.626528  229421 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:48:08.626553  229421 host.go:66] Checking if "newest-cni-009152" exists ...
	I1123 08:48:08.626960  229421 cli_runner.go:164] Run: docker container inspect newest-cni-009152 --format={{.State.Status}}
	I1123 08:48:08.641566  229421 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:48:08.644610  229421 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 08:48:08.644885  229421 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:48:08.644933  229421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:48:08.645010  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:08.650316  229421 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 08:48:08.650350  229421 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 08:48:08.650424  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:08.656483  229421 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:48:08.659424  229421 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1123 08:48:04.318926  222471 node_ready.go:57] node "default-k8s-diff-port-422900" has "Ready":"False" status (will retry)
	I1123 08:48:05.818344  222471 node_ready.go:49] node "default-k8s-diff-port-422900" is "Ready"
	I1123 08:48:05.818368  222471 node_ready.go:38] duration metric: took 41.002870002s for node "default-k8s-diff-port-422900" to be "Ready" ...
	I1123 08:48:05.818383  222471 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:48:05.818437  222471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:48:05.834937  222471 api_server.go:72] duration metric: took 42.130112502s to wait for apiserver process to appear ...
	I1123 08:48:05.834958  222471 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:48:05.835012  222471 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:48:05.861996  222471 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
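The healthz probe is a plain HTTPS GET against the apiserver on this profile's non-default port 8444, and any 200 response body ("ok") passes. It can be reproduced from the host with curl (sketch; -k skips verification of the minikubeCA-signed serving certificate, and anonymous access to the health endpoints, the Kubernetes default, is assumed):

    curl -k https://192.168.85.2:8444/healthz
    # => ok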
	I1123 08:48:05.863198  222471 api_server.go:141] control plane version: v1.34.1
	I1123 08:48:05.863220  222471 api_server.go:131] duration metric: took 28.256681ms to wait for apiserver health ...
	I1123 08:48:05.863230  222471 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:48:05.876222  222471 system_pods.go:59] 8 kube-system pods found
	I1123 08:48:05.876261  222471 system_pods.go:61] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:48:05.876269  222471 system_pods.go:61] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:05.876275  222471 system_pods.go:61] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:05.876279  222471 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:05.876283  222471 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:05.876287  222471 system_pods.go:61] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:05.876290  222471 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:05.876295  222471 system_pods.go:61] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:48:05.876302  222471 system_pods.go:74] duration metric: took 13.066445ms to wait for pod list to return data ...
	I1123 08:48:05.876311  222471 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:48:05.879409  222471 default_sa.go:45] found service account: "default"
	I1123 08:48:05.879483  222471 default_sa.go:55] duration metric: took 3.153818ms for default service account to be created ...
	I1123 08:48:05.879507  222471 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:48:05.883487  222471 system_pods.go:86] 8 kube-system pods found
	I1123 08:48:05.883569  222471 system_pods.go:89] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:48:05.883592  222471 system_pods.go:89] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:05.883629  222471 system_pods.go:89] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:05.883653  222471 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:05.883674  222471 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:05.883719  222471 system_pods.go:89] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:05.883744  222471 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:05.883766  222471 system_pods.go:89] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:48:05.883872  222471 retry.go:31] will retry after 307.730013ms: missing components: kube-dns
	I1123 08:48:06.196026  222471 system_pods.go:86] 8 kube-system pods found
	I1123 08:48:06.196136  222471 system_pods.go:89] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:48:06.196182  222471 system_pods.go:89] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:06.196218  222471 system_pods.go:89] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:06.196238  222471 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:06.196277  222471 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:06.196301  222471 system_pods.go:89] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:06.196323  222471 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:06.196373  222471 system_pods.go:89] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:48:06.196418  222471 retry.go:31] will retry after 350.493058ms: missing components: kube-dns
	I1123 08:48:06.552818  222471 system_pods.go:86] 8 kube-system pods found
	I1123 08:48:06.552900  222471 system_pods.go:89] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:48:06.552921  222471 system_pods.go:89] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:06.552942  222471 system_pods.go:89] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:06.552979  222471 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:06.552997  222471 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:06.553018  222471 system_pods.go:89] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:06.553054  222471 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:06.553081  222471 system_pods.go:89] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:48:06.553112  222471 retry.go:31] will retry after 345.301251ms: missing components: kube-dns
	I1123 08:48:06.906318  222471 system_pods.go:86] 8 kube-system pods found
	I1123 08:48:06.906350  222471 system_pods.go:89] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:48:06.906357  222471 system_pods.go:89] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:06.906364  222471 system_pods.go:89] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:06.906370  222471 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:06.906374  222471 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:06.906378  222471 system_pods.go:89] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:06.906383  222471 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:06.906388  222471 system_pods.go:89] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:48:06.906403  222471 retry.go:31] will retry after 424.887908ms: missing components: kube-dns
	I1123 08:48:07.336647  222471 system_pods.go:86] 8 kube-system pods found
	I1123 08:48:07.336675  222471 system_pods.go:89] "coredns-66bc5c9577-qctlw" [54e1b924-5413-4e3d-ad3c-51f6af499016] Running
	I1123 08:48:07.336683  222471 system_pods.go:89] "etcd-default-k8s-diff-port-422900" [b1097f2c-a920-47df-8a6a-ab3b1f003a40] Running
	I1123 08:48:07.336689  222471 system_pods.go:89] "kindnet-f2zrk" [016a8003-854a-4072-bd80-6ecf03b5af32] Running
	I1123 08:48:07.336694  222471 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-422900" [6b52ca07-9c05-48d2-bc44-a7e79de91ca1] Running
	I1123 08:48:07.336698  222471 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-422900" [d6fb362a-72ec-4434-a8f9-8a378f153e0b] Running
	I1123 08:48:07.336703  222471 system_pods.go:89] "kube-proxy-jrwr5" [83f0d2e5-4c5a-443e-acbe-533cd427a3f5] Running
	I1123 08:48:07.336708  222471 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-422900" [a6752c80-7231-4330-bc95-6b68b22b3696] Running
	I1123 08:48:07.336712  222471 system_pods.go:89] "storage-provisioner" [5e808c7e-c721-46a8-96ed-969c255a51eb] Running
	I1123 08:48:07.336719  222471 system_pods.go:126] duration metric: took 1.457190611s to wait for k8s-apps to be running ...
	I1123 08:48:07.336726  222471 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:48:07.336780  222471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:48:07.356255  222471 system_svc.go:56] duration metric: took 19.519641ms WaitForService to wait for kubelet
	I1123 08:48:07.356282  222471 kubeadm.go:587] duration metric: took 43.651461152s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:48:07.356298  222471 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:48:07.359129  222471 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:48:07.359155  222471 node_conditions.go:123] node cpu capacity is 2
	I1123 08:48:07.359170  222471 node_conditions.go:105] duration metric: took 2.866956ms to run NodePressure ...
	I1123 08:48:07.359232  222471 start.go:242] waiting for startup goroutines ...
	I1123 08:48:07.359245  222471 start.go:247] waiting for cluster config update ...
	I1123 08:48:07.359256  222471 start.go:256] writing updated cluster config ...
	I1123 08:48:07.359597  222471 ssh_runner.go:195] Run: rm -f paused
	I1123 08:48:07.364226  222471 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:48:07.367842  222471 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qctlw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.372827  222471 pod_ready.go:94] pod "coredns-66bc5c9577-qctlw" is "Ready"
	I1123 08:48:07.372848  222471 pod_ready.go:86] duration metric: took 4.983453ms for pod "coredns-66bc5c9577-qctlw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.375299  222471 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.390495  222471 pod_ready.go:94] pod "etcd-default-k8s-diff-port-422900" is "Ready"
	I1123 08:48:07.390573  222471 pod_ready.go:86] duration metric: took 15.202298ms for pod "etcd-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.394739  222471 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.402177  222471 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-422900" is "Ready"
	I1123 08:48:07.402254  222471 pod_ready.go:86] duration metric: took 7.441426ms for pod "kube-apiserver-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.407443  222471 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.771605  222471 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-422900" is "Ready"
	I1123 08:48:07.771634  222471 pod_ready.go:86] duration metric: took 364.12813ms for pod "kube-controller-manager-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:07.969574  222471 pod_ready.go:83] waiting for pod "kube-proxy-jrwr5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:08.368482  222471 pod_ready.go:94] pod "kube-proxy-jrwr5" is "Ready"
	I1123 08:48:08.368511  222471 pod_ready.go:86] duration metric: took 398.901022ms for pod "kube-proxy-jrwr5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:08.570122  222471 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:08.968182  222471 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-422900" is "Ready"
	I1123 08:48:08.968207  222471 pod_ready.go:86] duration metric: took 398.061382ms for pod "kube-scheduler-default-k8s-diff-port-422900" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:48:08.968218  222471 pod_ready.go:40] duration metric: took 1.603964206s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:48:09.102910  222471 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:48:09.107910  222471 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-422900" cluster and "default" namespace by default
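With the node Ready and every labelled control-plane pod reporting Ready, the harness declares this profile started. Outside the harness, the same condition can be approximated with kubectl against the context named in the log (sketch):

    # Sketch: wait for CoreDNS and list the static control-plane pods
    kubectl --context default-k8s-diff-port-422900 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
    kubectl --context default-k8s-diff-port-422900 -n kube-system \
      get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'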
	I1123 08:48:08.662313  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:48:08.662340  229421 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:48:08.662408  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:08.709822  229421 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:48:08.709843  229421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:48:08.709902  229421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009152
	I1123 08:48:08.726707  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:08.732799  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:08.739352  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:08.755880  229421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/newest-cni-009152/id_rsa Username:docker}
	I1123 08:48:08.954074  229421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:48:08.973390  229421 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:48:08.973522  229421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:48:09.105676  229421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:48:09.304753  229421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:48:09.338080  229421 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 08:48:09.338106  229421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 08:48:09.417965  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:48:09.417988  229421 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:48:09.474522  229421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:48:09.534389  229421 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 08:48:09.534412  229421 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 08:48:09.595176  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:48:09.595197  229421 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:48:09.614032  229421 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:48:09.614054  229421 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 08:48:09.683006  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:48:09.683026  229421 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:48:09.749770  229421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:48:09.906048  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:48:09.906068  229421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:48:10.086666  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:48:10.086744  229421 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:48:10.188555  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:48:10.188624  229421 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:48:10.255655  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:48:10.255734  229421 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:48:10.292490  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:48:10.292551  229421 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:48:10.318856  229421 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:48:10.318918  229421 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:48:10.353433  229421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:48:15.303532  229421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.197825637s)
	I1123 08:48:17.107683  229421 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.633021896s)
	I1123 08:48:17.107851  229421 api_server.go:72] duration metric: took 8.560101234s to wait for apiserver process to appear ...
	I1123 08:48:17.107890  229421 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:48:17.107931  229421 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:48:17.108138  229421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.803357131s)
	I1123 08:48:17.117932  229421 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 08:48:17.120446  229421 api_server.go:141] control plane version: v1.34.1
	I1123 08:48:17.120520  229421 api_server.go:131] duration metric: took 12.600567ms to wait for apiserver health ...
	I1123 08:48:17.120532  229421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:48:17.125381  229421 system_pods.go:59] 9 kube-system pods found
	I1123 08:48:17.125496  229421 system_pods.go:61] "coredns-66bc5c9577-2f96t" [1e3a238e-1b7f-4780-98e8-1ca450282eab] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:48:17.125535  229421 system_pods.go:61] "etcd-newest-cni-009152" [45e528b1-fc1a-43d9-bcd3-742447073748] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:48:17.125565  229421 system_pods.go:61] "kindnet-27cxr" [0433112f-46f3-4f6e-ac7a-f327bac4220f] Running
	I1123 08:48:17.125589  229421 system_pods.go:61] "kube-apiserver-newest-cni-009152" [db374a56-80b9-4625-b349-43e14e28795b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:48:17.125626  229421 system_pods.go:61] "kube-controller-manager-newest-cni-009152" [a0500ae6-4a1c-4497-875a-1096ed512bf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:48:17.125647  229421 system_pods.go:61] "kube-proxy-6rqcs" [5e20e1c3-53af-46c9-8717-3e0b65db8fc1] Running
	I1123 08:48:17.125670  229421 system_pods.go:61] "kube-scheduler-newest-cni-009152" [6ed99eb2-e2f5-4b9a-b695-632339e2b512] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:48:17.125709  229421 system_pods.go:61] "metrics-server-746fcd58dc-jjpvt" [a838c5f6-1a38-466f-9d8b-178bd3b8d3bb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:48:17.125730  229421 system_pods.go:61] "storage-provisioner" [efb1af07-ac6f-403e-b1e7-eb5b9b90d9f8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:48:17.125761  229421 system_pods.go:74] duration metric: took 5.213986ms to wait for pod list to return data ...
	I1123 08:48:17.125808  229421 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:48:17.128890  229421 default_sa.go:45] found service account: "default"
	I1123 08:48:17.128953  229421 default_sa.go:55] duration metric: took 3.124756ms for default service account to be created ...
	I1123 08:48:17.129001  229421 kubeadm.go:587] duration metric: took 8.581251171s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:48:17.129030  229421 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:48:17.132513  229421 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:48:17.132587  229421 node_conditions.go:123] node cpu capacity is 2
	I1123 08:48:17.132614  229421 node_conditions.go:105] duration metric: took 3.546225ms to run NodePressure ...
	I1123 08:48:17.132655  229421 start.go:242] waiting for startup goroutines ...
	I1123 08:48:17.163221  229421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.413408108s)
	I1123 08:48:17.163437  229421 addons.go:495] Verifying addon metrics-server=true in "newest-cni-009152"
	I1123 08:48:17.163526  229421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.810019981s)
	I1123 08:48:17.166740  229421 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-009152 addons enable metrics-server
	
	I1123 08:48:17.169774  229421 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1123 08:48:17.172797  229421 addons.go:530] duration metric: took 8.624645819s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1123 08:48:17.172905  229421 start.go:247] waiting for cluster config update ...
	I1123 08:48:17.172930  229421 start.go:256] writing updated cluster config ...
	I1123 08:48:17.173296  229421 ssh_runner.go:195] Run: rm -f paused
	I1123 08:48:17.254564  229421 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:48:17.257750  229421 out.go:179] * Done! kubectl is now configured to use "newest-cni-009152" cluster and "default" namespace by default
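
The tail of this start log shows minikube applying the addon manifests over SSH and then waiting on the apiserver: api_server.go polls https://192.168.76.2:8443/healthz until it returns 200 before moving on to the kube-system pod checks. A minimal Go sketch of an equivalent health poll follows; the endpoint address, timeout, and the decision to skip TLS verification are illustrative assumptions, not minikube's actual implementation.

    // healthz_poll.go: hedged sketch of polling an apiserver /healthz endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver serves a self-signed certificate during bring-up, so this
            // sketch skips verification; a real client would trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned 200: %s\n", body) // cf. the "returned 200: ok" lines above
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        // 192.168.76.2:8443 is the address from this log; substitute your own node IP.
        if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
            panic(err)
        }
    }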
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	fbce34324f7e9       1611cd07b61d5       11 seconds ago       Running             busybox                   0                   5f0b2e2adef5f       busybox                                                default
	c81fe9aa72108       138784d87c9c5       18 seconds ago       Running             coredns                   0                   692e20f85d4d9       coredns-66bc5c9577-qctlw                               kube-system
	3d82912652c69       ba04bb24b9575       18 seconds ago       Running             storage-provisioner       0                   d9205045f65a9       storage-provisioner                                    kube-system
	65dc1bda083b8       05baa95f5142d       58 seconds ago       Running             kube-proxy                0                   f6b807b01d3f9       kube-proxy-jrwr5                                       kube-system
	5ba4f4ccce243       b1a8c6f707935       59 seconds ago       Running             kindnet-cni               0                   0c78a74bb5346       kindnet-f2zrk                                          kube-system
	77d7d9411bb7e       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   c77abab5c615d       kube-controller-manager-default-k8s-diff-port-422900   kube-system
	633f3c0b9836a       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   1a6dad8102dd2       kube-scheduler-default-k8s-diff-port-422900            kube-system
	c24564cc452db       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   d1e5adb24eb7f       kube-apiserver-default-k8s-diff-port-422900            kube-system
	8732d9c3aa176       a1894772a478e       About a minute ago   Running             etcd                      0                   2b15dbba62907       etcd-default-k8s-diff-port-422900                      kube-system
	
	
	==> containerd <==
	Nov 23 08:48:05 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:05.966755505Z" level=info msg="connecting to shim 3d82912652c69d404d4d77e211bdd74d1b0de8b7a5cec57e067c47cae6edc5a4" address="unix:///run/containerd/s/7b2aa4fc9f79348feeb1744145364c6d26f011beaa4996e7190d3a47d4910cc4" protocol=ttrpc version=3
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.015940361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qctlw,Uid:54e1b924-5413-4e3d-ad3c-51f6af499016,Namespace:kube-system,Attempt:0,} returns sandbox id \"692e20f85d4d97e68f27646a051153f87370bdd3aae26c32d463570dbd8a89a2\""
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.044976817Z" level=info msg="CreateContainer within sandbox \"692e20f85d4d97e68f27646a051153f87370bdd3aae26c32d463570dbd8a89a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.058719077Z" level=info msg="Container c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.070770059Z" level=info msg="StartContainer for \"3d82912652c69d404d4d77e211bdd74d1b0de8b7a5cec57e067c47cae6edc5a4\" returns successfully"
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.088974256Z" level=info msg="CreateContainer within sandbox \"692e20f85d4d97e68f27646a051153f87370bdd3aae26c32d463570dbd8a89a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e\""
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.090041189Z" level=info msg="StartContainer for \"c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e\""
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.105779559Z" level=info msg="connecting to shim c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e" address="unix:///run/containerd/s/3e0e06640614723dd327238dd89dd54b8de4840c751689a55a593a27cdbc3313" protocol=ttrpc version=3
	Nov 23 08:48:06 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:06.217663959Z" level=info msg="StartContainer for \"c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e\" returns successfully"
	Nov 23 08:48:09 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:09.812465338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:92f3c4e4-b38c-4d7b-b2fb-56d47cd1c156,Namespace:default,Attempt:0,}"
	Nov 23 08:48:09 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:09.867384046Z" level=info msg="connecting to shim 5f0b2e2adef5f6ff067d7ddbe4e04da3b735c7e192097b219488ea85ccb9684a" address="unix:///run/containerd/s/40e21a2fbdfd1b9e9e160ddef84ec73b78504abebf3c55f2739bd009673de7fa" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:48:10 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:10.017708565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:92f3c4e4-b38c-4d7b-b2fb-56d47cd1c156,Namespace:default,Attempt:0,} returns sandbox id \"5f0b2e2adef5f6ff067d7ddbe4e04da3b735c7e192097b219488ea85ccb9684a\""
	Nov 23 08:48:10 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:10.024030921Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.387378380Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.389679282Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937186"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.408071065Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.413443904Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.414754098Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.390535285s"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.414908053Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.428030728Z" level=info msg="CreateContainer within sandbox \"5f0b2e2adef5f6ff067d7ddbe4e04da3b735c7e192097b219488ea85ccb9684a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.440547603Z" level=info msg="Container fbce34324f7e9e12d9eb802ef034e3aa8628e7f38e399229ab78a4da4c151b03: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.450453878Z" level=info msg="CreateContainer within sandbox \"5f0b2e2adef5f6ff067d7ddbe4e04da3b735c7e192097b219488ea85ccb9684a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"fbce34324f7e9e12d9eb802ef034e3aa8628e7f38e399229ab78a4da4c151b03\""
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.453543753Z" level=info msg="StartContainer for \"fbce34324f7e9e12d9eb802ef034e3aa8628e7f38e399229ab78a4da4c151b03\""
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.465047045Z" level=info msg="connecting to shim fbce34324f7e9e12d9eb802ef034e3aa8628e7f38e399229ab78a4da4c151b03" address="unix:///run/containerd/s/40e21a2fbdfd1b9e9e160ddef84ec73b78504abebf3c55f2739bd009673de7fa" protocol=ttrpc version=3
	Nov 23 08:48:12 default-k8s-diff-port-422900 containerd[760]: time="2025-11-23T08:48:12.581795459Z" level=info msg="StartContainer for \"fbce34324f7e9e12d9eb802ef034e3aa8628e7f38e399229ab78a4da4c151b03\" returns successfully"
	
	
	==> coredns [c81fe9aa7210899126b9d90ca0fb809058988397087bf476687339ed5192440e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35789 - 18734 "HINFO IN 5272406496239846288.2426256441528353838. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005418124s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-422900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-422900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=default-k8s-diff-port-422900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_47_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:47:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-422900
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:48:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:48:20 +0000   Sun, 23 Nov 2025 08:47:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:48:20 +0000   Sun, 23 Nov 2025 08:47:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:48:20 +0000   Sun, 23 Nov 2025 08:47:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:48:20 +0000   Sun, 23 Nov 2025 08:48:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-422900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                d73c838c-8202-472f-9042-cce9ff16e283
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-66bc5c9577-qctlw                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     61s
	  kube-system                 etcd-default-k8s-diff-port-422900                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         65s
	  kube-system                 kindnet-f2zrk                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      61s
	  kube-system                 kube-apiserver-default-k8s-diff-port-422900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-422900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-jrwr5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-default-k8s-diff-port-422900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 58s                kube-proxy       
	  Normal   NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 77s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasSufficientPID
	  Normal   Starting                 77s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 66s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  66s                kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s                kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s                kubelet          Node default-k8s-diff-port-422900 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           62s                node-controller  Node default-k8s-diff-port-422900 event: Registered Node default-k8s-diff-port-422900 in Controller
	  Normal   NodeReady                19s                kubelet          Node default-k8s-diff-port-422900 status is now: NodeReady
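
The percentages in the "Allocated resources" table above are the summed pod requests divided by the node's allocatable capacity (2 CPUs and 8022296Ki of memory). A quick check of that arithmetic, with the per-pod figures taken from the "Non-terminated Pods" table:

    // alloc_check.go: recomputes the 850m (42%) CPU and 220Mi (2%) memory figures above.
    package main

    import "fmt"

    func main() {
        // CPU requests: coredns 100m, etcd 100m, kindnet 100m, apiserver 250m,
        // controller-manager 200m, scheduler 100m (busybox, kube-proxy and
        // storage-provisioner request nothing).
        cpuRequestsMilli := 100 + 100 + 100 + 250 + 200 + 100
        allocatableMilli := 2 * 1000
        fmt.Printf("cpu: %dm (%d%%)\n", cpuRequestsMilli, 100*cpuRequestsMilli/allocatableMilli) // 850m (42%)

        // Memory requests: coredns 70Mi, etcd 100Mi, kindnet 50Mi.
        memRequestsMi := 70 + 100 + 50
        allocatableKi := 8022296
        fmt.Printf("memory: %dMi (%d%%)\n", memRequestsMi, 100*memRequestsMi*1024/allocatableKi) // 220Mi (2%)
    }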
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	
	
	==> etcd [8732d9c3aa176b961a8886b66b2192dedc6027d6bb6eb829f53bcfb146373fb8] <==
	{"level":"warn","ts":"2025-11-23T08:47:11.483927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.561400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.601464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.608368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.646824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.677066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.733522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.756279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.781181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.822531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.856376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.882033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.919045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.947315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:11.979752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:12.034507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:12.081897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:47:12.288620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48554","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:47:23.689063Z","caller":"traceutil/trace.go:172","msg":"trace[969222846] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"142.365421ms","start":"2025-11-23T08:47:23.546678Z","end":"2025-11-23T08:47:23.689044Z","steps":["trace[969222846] 'process raft request'  (duration: 142.105479ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:47:23.689205Z","caller":"traceutil/trace.go:172","msg":"trace[996455750] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"142.411132ms","start":"2025-11-23T08:47:23.546788Z","end":"2025-11-23T08:47:23.689199Z","steps":["trace[996455750] 'process raft request'  (duration: 142.029901ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:47:23.689129Z","caller":"traceutil/trace.go:172","msg":"trace[525610800] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"143.017466ms","start":"2025-11-23T08:47:23.546100Z","end":"2025-11-23T08:47:23.689117Z","steps":["trace[525610800] 'process raft request'  (duration: 102.379373ms)","trace[525610800] 'compare'  (duration: 40.193131ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:47:23.875964Z","caller":"traceutil/trace.go:172","msg":"trace[1297937374] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"155.378345ms","start":"2025-11-23T08:47:23.720566Z","end":"2025-11-23T08:47:23.875944Z","steps":["trace[1297937374] 'process raft request'  (duration: 69.587077ms)","trace[1297937374] 'compare'  (duration: 82.71544ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:47:23.876077Z","caller":"traceutil/trace.go:172","msg":"trace[508316777] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"155.415514ms","start":"2025-11-23T08:47:23.720655Z","end":"2025-11-23T08:47:23.876071Z","steps":["trace[508316777] 'process raft request'  (duration: 152.32038ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:47:23.876119Z","caller":"traceutil/trace.go:172","msg":"trace[776766712] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"111.514187ms","start":"2025-11-23T08:47:23.764600Z","end":"2025-11-23T08:47:23.876115Z","steps":["trace[776766712] 'process raft request'  (duration: 108.414137ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:47:23.877187Z","caller":"traceutil/trace.go:172","msg":"trace[1907366165] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"104.214538ms","start":"2025-11-23T08:47:23.772951Z","end":"2025-11-23T08:47:23.877165Z","steps":["trace[1907366165] 'process raft request'  (duration: 100.098181ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:48:24 up  1:30,  0 user,  load average: 6.12, 4.60, 3.58
	Linux default-k8s-diff-port-422900 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ba4f4ccce243d6bd5f8419e76671f6e4f63d89e07efa058612ddc013cea3d26] <==
	I1123 08:47:25.075838       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:47:25.165711       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:47:25.165843       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:47:25.165856       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:47:25.165872       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:47:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:47:25.368138       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:47:25.368156       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:47:25.368165       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:47:25.368852       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:47:55.369158       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:47:55.369274       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:47:55.369353       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:47:55.369502       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 08:47:56.569163       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:47:56.569202       1 metrics.go:72] Registering metrics
	I1123 08:47:56.569256       1 controller.go:711] "Syncing nftables rules"
	I1123 08:48:05.369491       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:48:05.369545       1 main.go:301] handling current node
	I1123 08:48:15.367326       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:48:15.367392       1 main.go:301] handling current node
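
The "Failed to watch" reflector errors followed by "Caches are synced" above are client-go's informer machinery at work: kindnet lists and watches Nodes, Pods, Namespaces and NetworkPolicies, retries while 10.96.0.1:443 is unreachable, and only starts syncing nftables rules once its local caches are populated. A minimal, hedged sketch of that list/watch pattern for the Node resource (the in-cluster config and 30-second resync period are assumptions, not kindnet's actual settings):

    // node_informer.go: sketch of the client-go informer pattern behind the log lines above.
    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        // In-cluster config reaches the apiserver through the kubernetes Service
        // (10.96.0.1:443), the same endpoint the reflectors above time out against.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
        nodeInformer := factory.Core().V1().Nodes().Informer()
        nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                node := obj.(*corev1.Node)
                fmt.Printf("handling node %s\n", node.Name) // cf. "Handling node with IPs" above
            },
        })

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        // Equivalent to the "Waiting for caches to sync" / "Caches are synced" lines:
        // block until the initial List has populated the local store.
        if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
            panic("timed out waiting for caches to sync")
        }
        select {} // keep watching until the process is killed
    }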
	
	
	==> kube-apiserver [c24564cc452db3082b6f04eff5a957e5d95cd8345c35167e34d0542828cd9c3a] <==
	I1123 08:47:13.927869       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 08:47:13.970297       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:47:13.975783       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 08:47:14.036945       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:47:14.052478       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:47:14.121747       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:47:14.124720       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:47:14.256500       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:47:14.306441       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:47:14.306466       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:47:16.312523       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:47:16.443844       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:47:16.618441       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:47:16.636742       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:47:16.638836       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:47:16.652965       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:47:17.423715       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:47:18.429785       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:47:18.477811       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:47:18.500006       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:47:23.169075       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:47:23.178509       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:47:23.324034       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:47:23.545196       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:48:19.717715       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:56522: use of closed network connection
	
	
	==> kube-controller-manager [77d7d9411bb7e0d4cfb89ca9086ea353d80c270805cdcdc9170342555057bf8c] <==
	I1123 08:47:22.506527       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:47:22.506701       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:47:22.506867       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-422900"
	I1123 08:47:22.507011       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:47:22.507119       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:47:22.507222       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:47:22.508136       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:47:22.509814       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:47:22.509827       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:47:22.509844       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:47:22.509856       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:47:22.509871       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:47:22.509878       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:47:22.510033       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:47:22.514922       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:47:22.514411       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:47:22.516760       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:47:22.518148       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:47:22.525218       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:47:22.525485       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:47:22.531091       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:47:22.540420       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:47:22.543091       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:47:22.573968       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:48:07.513591       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [65dc1bda083b8e1de067446e56ea3cbcd1faa3a76018a5ab231a1a2ef8c1abf0] <==
	I1123 08:47:25.692674       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:47:25.843509       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:47:26.043908       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:47:26.046156       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:47:26.046394       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:47:26.099459       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:47:26.099519       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:47:26.106700       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:47:26.107314       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:47:26.109240       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:47:26.111799       1 config.go:200] "Starting service config controller"
	I1123 08:47:26.117597       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:47:26.117784       1 config.go:309] "Starting node config controller"
	I1123 08:47:26.121085       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:47:26.121179       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:47:26.114836       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:47:26.121686       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:47:26.121764       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:47:26.114857       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:47:26.122542       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:47:26.123136       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:47:26.218000       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [633f3c0b9836ab275f8d03494d0b328d3dd8a33fdc9be07f245eb8c2995982e8] <==
	I1123 08:47:13.789280       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:47:17.487715       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:47:17.492069       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:47:17.505012       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:47:17.507539       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:47:17.507709       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:47:17.507849       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:47:17.534055       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:47:17.534256       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:47:17.534385       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:47:17.534455       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:47:17.609557       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 08:47:17.638023       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:47:17.637959       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:47:19 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:19.134501    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38e4568b9ca181975632ac03bdc1c733-usr-local-share-ca-certificates\") pod \"kube-controller-manager-default-k8s-diff-port-422900\" (UID: \"38e4568b9ca181975632ac03bdc1c733\") " pod="kube-system/kube-controller-manager-default-k8s-diff-port-422900"
	Nov 23 08:47:19 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:19.205248    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-422900" podStartSLOduration=0.205228606 podStartE2EDuration="205.228606ms" podCreationTimestamp="2025-11-23 08:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:19.187688399 +0000 UTC m=+0.811506267" watchObservedRunningTime="2025-11-23 08:47:19.205228606 +0000 UTC m=+0.829046482"
	Nov 23 08:47:19 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:19.205346    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-422900" podStartSLOduration=0.205340821 podStartE2EDuration="205.340821ms" podCreationTimestamp="2025-11-23 08:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:19.20503799 +0000 UTC m=+0.828855891" watchObservedRunningTime="2025-11-23 08:47:19.205340821 +0000 UTC m=+0.829158688"
	Nov 23 08:47:22 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:22.582528    1461 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:47:22 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:22.583549    1461 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.073777    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxx7h\" (UniqueName: \"kubernetes.io/projected/83f0d2e5-4c5a-443e-acbe-533cd427a3f5-kube-api-access-jxx7h\") pod \"kube-proxy-jrwr5\" (UID: \"83f0d2e5-4c5a-443e-acbe-533cd427a3f5\") " pod="kube-system/kube-proxy-jrwr5"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.073865    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/016a8003-854a-4072-bd80-6ecf03b5af32-cni-cfg\") pod \"kindnet-f2zrk\" (UID: \"016a8003-854a-4072-bd80-6ecf03b5af32\") " pod="kube-system/kindnet-f2zrk"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.073926    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/83f0d2e5-4c5a-443e-acbe-533cd427a3f5-kube-proxy\") pod \"kube-proxy-jrwr5\" (UID: \"83f0d2e5-4c5a-443e-acbe-533cd427a3f5\") " pod="kube-system/kube-proxy-jrwr5"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.073948    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/016a8003-854a-4072-bd80-6ecf03b5af32-xtables-lock\") pod \"kindnet-f2zrk\" (UID: \"016a8003-854a-4072-bd80-6ecf03b5af32\") " pod="kube-system/kindnet-f2zrk"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.073982    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/016a8003-854a-4072-bd80-6ecf03b5af32-lib-modules\") pod \"kindnet-f2zrk\" (UID: \"016a8003-854a-4072-bd80-6ecf03b5af32\") " pod="kube-system/kindnet-f2zrk"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.074002    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl4vd\" (UniqueName: \"kubernetes.io/projected/016a8003-854a-4072-bd80-6ecf03b5af32-kube-api-access-kl4vd\") pod \"kindnet-f2zrk\" (UID: \"016a8003-854a-4072-bd80-6ecf03b5af32\") " pod="kube-system/kindnet-f2zrk"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.074043    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83f0d2e5-4c5a-443e-acbe-533cd427a3f5-xtables-lock\") pod \"kube-proxy-jrwr5\" (UID: \"83f0d2e5-4c5a-443e-acbe-533cd427a3f5\") " pod="kube-system/kube-proxy-jrwr5"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.074068    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83f0d2e5-4c5a-443e-acbe-533cd427a3f5-lib-modules\") pod \"kube-proxy-jrwr5\" (UID: \"83f0d2e5-4c5a-443e-acbe-533cd427a3f5\") " pod="kube-system/kube-proxy-jrwr5"
	Nov 23 08:47:24 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:24.281210    1461 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 08:47:25 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:25.859384    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-f2zrk" podStartSLOduration=2.859356111 podStartE2EDuration="2.859356111s" podCreationTimestamp="2025-11-23 08:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:25.858874152 +0000 UTC m=+7.482692028" watchObservedRunningTime="2025-11-23 08:47:25.859356111 +0000 UTC m=+7.483173987"
	Nov 23 08:47:29 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:47:29.713148    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jrwr5" podStartSLOduration=6.713128868 podStartE2EDuration="6.713128868s" podCreationTimestamp="2025-11-23 08:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:25.940770858 +0000 UTC m=+7.564588734" watchObservedRunningTime="2025-11-23 08:47:29.713128868 +0000 UTC m=+11.336946752"
	Nov 23 08:48:05 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:05.398801    1461 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:48:05 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:05.528813    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5e808c7e-c721-46a8-96ed-969c255a51eb-tmp\") pod \"storage-provisioner\" (UID: \"5e808c7e-c721-46a8-96ed-969c255a51eb\") " pod="kube-system/storage-provisioner"
	Nov 23 08:48:05 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:05.529066    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz5tk\" (UniqueName: \"kubernetes.io/projected/5e808c7e-c721-46a8-96ed-969c255a51eb-kube-api-access-wz5tk\") pod \"storage-provisioner\" (UID: \"5e808c7e-c721-46a8-96ed-969c255a51eb\") " pod="kube-system/storage-provisioner"
	Nov 23 08:48:05 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:05.629740    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv289\" (UniqueName: \"kubernetes.io/projected/54e1b924-5413-4e3d-ad3c-51f6af499016-kube-api-access-lv289\") pod \"coredns-66bc5c9577-qctlw\" (UID: \"54e1b924-5413-4e3d-ad3c-51f6af499016\") " pod="kube-system/coredns-66bc5c9577-qctlw"
	Nov 23 08:48:05 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:05.629956    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54e1b924-5413-4e3d-ad3c-51f6af499016-config-volume\") pod \"coredns-66bc5c9577-qctlw\" (UID: \"54e1b924-5413-4e3d-ad3c-51f6af499016\") " pod="kube-system/coredns-66bc5c9577-qctlw"
	Nov 23 08:48:06 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:06.945854    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.945834205 podStartE2EDuration="41.945834205s" podCreationTimestamp="2025-11-23 08:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:48:06.930407535 +0000 UTC m=+48.554225403" watchObservedRunningTime="2025-11-23 08:48:06.945834205 +0000 UTC m=+48.569652081"
	Nov 23 08:48:09 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:09.485928    1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qctlw" podStartSLOduration=46.485899232 podStartE2EDuration="46.485899232s" podCreationTimestamp="2025-11-23 08:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:48:06.958194163 +0000 UTC m=+48.582012056" watchObservedRunningTime="2025-11-23 08:48:09.485899232 +0000 UTC m=+51.109717108"
	Nov 23 08:48:09 default-k8s-diff-port-422900 kubelet[1461]: I1123 08:48:09.670249    1461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f85ht\" (UniqueName: \"kubernetes.io/projected/92f3c4e4-b38c-4d7b-b2fb-56d47cd1c156-kube-api-access-f85ht\") pod \"busybox\" (UID: \"92f3c4e4-b38c-4d7b-b2fb-56d47cd1c156\") " pod="default/busybox"
	Nov 23 08:48:19 default-k8s-diff-port-422900 kubelet[1461]: E1123 08:48:19.716061    1461 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.85.2:44412->192.168.85.2:10010: read tcp 192.168.85.2:44412->192.168.85.2:10010: read: connection reset by peer
	
	
	==> storage-provisioner [3d82912652c69d404d4d77e211bdd74d1b0de8b7a5cec57e067c47cae6edc5a4] <==
	W1123 08:48:06.118324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:48:06.118485       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:48:06.119524       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-422900_20e27392-4975-4bdb-badb-b3a986897ab5!
	I1123 08:48:06.122436       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f83257d8-5529-4c23-8339-a6b35debddd7", APIVersion:"v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-422900_20e27392-4975-4bdb-badb-b3a986897ab5 became leader
	W1123 08:48:06.135670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:06.145145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:48:06.219948       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-422900_20e27392-4975-4bdb-badb-b3a986897ab5!
	W1123 08:48:08.149486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:08.158944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:10.167997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:10.173106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:12.176307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:12.185755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:14.188860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:14.194396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:16.198360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:16.211028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:18.214872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:18.220840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:20.229692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:20.237369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:22.244994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:22.264365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:24.267528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:48:24.278632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-422900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (16.36s)
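
Editor's note: the repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above are the API server flagging each read/write of the v1 Endpoints object the provisioner uses as its leader-election lock (see the LeaderElection event on Kind:"Endpoints" "k8s.io-minikube-hostpath" a few lines earlier). For reference, a minimal sketch of the Lease-based equivalent that current client-go controllers typically use; the lock name and namespace are copied from the log, everything else is illustrative and is not the storage-provisioner's actual code:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumes a reachable kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	id, _ := os.Hostname()

	// Lease lock in coordination.k8s.io/v1 instead of a v1 Endpoints object,
	// so renewals do not trigger the Endpoints deprecation warning.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; start provisioning work here")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership")
			},
		},
	})
}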

                                                
                                    

Test pass (299/333)

Order passed test Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.61
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.2
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.92
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 142.64
29 TestAddons/serial/Volcano 40.86
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 8.92
35 TestAddons/parallel/Registry 15.13
36 TestAddons/parallel/RegistryCreds 1
37 TestAddons/parallel/Ingress 20.04
38 TestAddons/parallel/InspektorGadget 11.97
39 TestAddons/parallel/MetricsServer 6.8
41 TestAddons/parallel/CSI 49.01
42 TestAddons/parallel/Headlamp 11.32
43 TestAddons/parallel/CloudSpanner 6.63
44 TestAddons/parallel/LocalPath 53.23
45 TestAddons/parallel/NvidiaDevicePlugin 5.55
46 TestAddons/parallel/Yakd 11.81
48 TestAddons/StoppedEnableDisable 12.31
49 TestCertOptions 37.58
50 TestCertExpiration 232.77
52 TestForceSystemdFlag 38.46
53 TestForceSystemdEnv 42.66
54 TestDockerEnvContainerd 46.08
58 TestErrorSpam/setup 31.35
59 TestErrorSpam/start 0.77
60 TestErrorSpam/status 1.24
61 TestErrorSpam/pause 1.74
62 TestErrorSpam/unpause 1.73
63 TestErrorSpam/stop 1.65
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 78.2
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.24
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
75 TestFunctional/serial/CacheCmd/cache/add_local 1.23
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 62.82
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 4.86
89 TestFunctional/parallel/ConfigCmd 0.42
90 TestFunctional/parallel/DashboardCmd 8.91
91 TestFunctional/parallel/DryRun 0.56
92 TestFunctional/parallel/InternationalLanguage 0.29
93 TestFunctional/parallel/StatusCmd 1.37
97 TestFunctional/parallel/ServiceCmdConnect 7.63
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 26.4
101 TestFunctional/parallel/SSHCmd 0.73
102 TestFunctional/parallel/CpCmd 2.09
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.7
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.94
113 TestFunctional/parallel/License 0.32
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
116 TestFunctional/parallel/Version/short 0.09
117 TestFunctional/parallel/Version/components 1.36
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.45
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
125 TestFunctional/parallel/ImageCommands/ImageBuild 4.29
126 TestFunctional/parallel/ImageCommands/Setup 0.62
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.21
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/MountCmd/any-port 8.17
144 TestFunctional/parallel/MountCmd/specific-port 2.36
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.2
146 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
148 TestFunctional/parallel/ProfileCmd/profile_list 0.5
149 TestFunctional/parallel/ServiceCmd/List 0.63
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
151 TestFunctional/parallel/ServiceCmd/JSONOutput 0.65
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
153 TestFunctional/parallel/ServiceCmd/Format 0.5
154 TestFunctional/parallel/ServiceCmd/URL 0.5
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 221.18
163 TestMultiControlPlane/serial/DeployApp 7.92
164 TestMultiControlPlane/serial/PingHostFromPods 1.6
165 TestMultiControlPlane/serial/AddWorkerNode 59.91
166 TestMultiControlPlane/serial/NodeLabels 0.13
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
168 TestMultiControlPlane/serial/CopyFile 20.26
169 TestMultiControlPlane/serial/StopSecondaryNode 2.32
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
171 TestMultiControlPlane/serial/RestartSecondaryNode 13.36
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.55
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 91.27
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.51
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
176 TestMultiControlPlane/serial/StopCluster 36.63
177 TestMultiControlPlane/serial/RestartCluster 68.55
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
179 TestMultiControlPlane/serial/AddSecondaryNode 87.04
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.52
185 TestJSONOutput/start/Command 76.69
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.73
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.65
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.91
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 41.35
211 TestKicCustomNetwork/use_default_bridge_network 34.75
212 TestKicExistingNetwork 35.86
213 TestKicCustomSubnet 35.42
214 TestKicStaticIP 40.05
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 70.16
219 TestMountStart/serial/StartWithMountFirst 8.31
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 8.14
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.79
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 140.26
231 TestMultiNode/serial/DeployApp2Nodes 4.92
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 58.16
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.66
237 TestMultiNode/serial/StopNode 2.43
238 TestMultiNode/serial/StartAfterStop 7.96
239 TestMultiNode/serial/RestartKeepsNodes 78.67
240 TestMultiNode/serial/DeleteNode 5.68
241 TestMultiNode/serial/StopMultiNode 24.1
242 TestMultiNode/serial/RestartMultiNode 47.12
243 TestMultiNode/serial/ValidateNameConflict 34.31
248 TestPreload 120.14
250 TestScheduledStopUnix 111.44
253 TestInsufficientStorage 13.45
254 TestRunningBinaryUpgrade 69.56
256 TestKubernetesUpgrade 103.84
257 TestMissingContainerUpgrade 153.4
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 50.54
261 TestNoKubernetes/serial/StartWithStopK8s 24.71
262 TestNoKubernetes/serial/Start 7.78
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
265 TestNoKubernetes/serial/ProfileList 0.91
266 TestNoKubernetes/serial/Stop 3.28
267 TestNoKubernetes/serial/StartNoArgs 7.31
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
269 TestStoppedBinaryUpgrade/Setup 0.81
270 TestStoppedBinaryUpgrade/Upgrade 64.72
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.84
280 TestPause/serial/Start 83.81
281 TestPause/serial/SecondStartNoReconfiguration 7.26
289 TestNetworkPlugins/group/false 5.15
290 TestPause/serial/Pause 0.83
291 TestPause/serial/VerifyStatus 0.4
292 TestPause/serial/Unpause 0.88
293 TestPause/serial/PauseAgain 1.07
294 TestPause/serial/DeletePaused 3.05
298 TestPause/serial/VerifyDeletedResources 0.17
300 TestStartStop/group/old-k8s-version/serial/FirstStart 63.43
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
303 TestStartStop/group/old-k8s-version/serial/Stop 12.16
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
305 TestStartStop/group/old-k8s-version/serial/SecondStart 56.03
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.29
309 TestStartStop/group/no-preload/serial/FirstStart 75.35
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
311 TestStartStop/group/old-k8s-version/serial/Pause 3.26
313 TestStartStop/group/embed-certs/serial/FirstStart 92.2
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
316 TestStartStop/group/no-preload/serial/Stop 12.13
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
319 TestStartStop/group/no-preload/serial/SecondStart 54.69
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.66
321 TestStartStop/group/embed-certs/serial/Stop 12.71
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
323 TestStartStop/group/embed-certs/serial/SecondStart 56.38
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
327 TestStartStop/group/no-preload/serial/Pause 3.2
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.44
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.13
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.33
333 TestStartStop/group/embed-certs/serial/Pause 4.45
335 TestStartStop/group/newest-cni/serial/FirstStart 39.19
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
338 TestStartStop/group/newest-cni/serial/Stop 1.48
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
340 TestStartStop/group/newest-cni/serial/SecondStart 17.14
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
345 TestStartStop/group/newest-cni/serial/Pause 3.61
346 TestNetworkPlugins/group/auto/Start 86.94
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.53
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.46
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
350 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 68.08
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
352 TestNetworkPlugins/group/auto/KubeletFlags 0.36
353 TestNetworkPlugins/group/auto/NetCatPod 8.33
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.72
357 TestNetworkPlugins/group/auto/DNS 0.43
358 TestNetworkPlugins/group/auto/Localhost 0.27
359 TestNetworkPlugins/group/auto/HairPin 0.19
360 TestNetworkPlugins/group/kindnet/Start 85.91
361 TestNetworkPlugins/group/calico/Start 58.43
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.42
364 TestNetworkPlugins/group/calico/NetCatPod 9.3
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
367 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
368 TestNetworkPlugins/group/calico/DNS 0.23
369 TestNetworkPlugins/group/calico/Localhost 0.28
370 TestNetworkPlugins/group/calico/HairPin 0.18
371 TestNetworkPlugins/group/kindnet/DNS 0.21
372 TestNetworkPlugins/group/kindnet/Localhost 0.25
373 TestNetworkPlugins/group/kindnet/HairPin 0.2
374 TestNetworkPlugins/group/custom-flannel/Start 70.37
375 TestNetworkPlugins/group/enable-default-cni/Start 82.33
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.28
378 TestNetworkPlugins/group/custom-flannel/DNS 0.18
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.35
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
386 TestNetworkPlugins/group/flannel/Start 64.37
387 TestNetworkPlugins/group/bridge/Start 74.7
388 TestNetworkPlugins/group/flannel/ControllerPod 6
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
390 TestNetworkPlugins/group/flannel/NetCatPod 9.26
391 TestNetworkPlugins/group/flannel/DNS 0.19
392 TestNetworkPlugins/group/flannel/Localhost 0.14
393 TestNetworkPlugins/group/flannel/HairPin 0.16
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
395 TestNetworkPlugins/group/bridge/NetCatPod 10.37
396 TestNetworkPlugins/group/bridge/DNS 0.18
397 TestNetworkPlugins/group/bridge/Localhost 0.14
398 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.28.0/json-events (5.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-559259 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-559259 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.606356895s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.61s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 07:55:54.070299    4151 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1123 07:55:54.070374    4151 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-559259
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-559259: exit status 85 (85.792665ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-559259 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-559259 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:55:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:55:48.504748    4156 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:55:48.504952    4156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:48.504978    4156 out.go:374] Setting ErrFile to fd 2...
	I1123 07:55:48.504996    4156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:48.505271    4156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	W1123 07:55:48.505465    4156 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21966-2339/.minikube/config/config.json: open /home/jenkins/minikube-integration/21966-2339/.minikube/config/config.json: no such file or directory
	I1123 07:55:48.505920    4156 out.go:368] Setting JSON to true
	I1123 07:55:48.506697    4156 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2297,"bootTime":1763882251,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 07:55:48.506789    4156 start.go:143] virtualization:  
	I1123 07:55:48.512403    4156 out.go:99] [download-only-559259] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 07:55:48.512644    4156 notify.go:221] Checking for updates...
	W1123 07:55:48.512599    4156 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 07:55:48.515978    4156 out.go:171] MINIKUBE_LOCATION=21966
	I1123 07:55:48.519403    4156 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:55:48.522530    4156 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 07:55:48.525672    4156 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 07:55:48.529080    4156 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 07:55:48.535104    4156 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 07:55:48.535404    4156 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:55:48.570738    4156 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 07:55:48.570853    4156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:48.969581    4156 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 07:55:48.960472307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:55:48.969680    4156 docker.go:319] overlay module found
	I1123 07:55:48.972752    4156 out.go:99] Using the docker driver based on user configuration
	I1123 07:55:48.972789    4156 start.go:309] selected driver: docker
	I1123 07:55:48.972806    4156 start.go:927] validating driver "docker" against <nil>
	I1123 07:55:48.972917    4156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:49.032293    4156 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 07:55:49.023434525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:55:49.032441    4156 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:55:49.032736    4156 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 07:55:49.032907    4156 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 07:55:49.036034    4156 out.go:171] Using Docker driver with root privileges
	I1123 07:55:49.039163    4156 cni.go:84] Creating CNI manager for ""
	I1123 07:55:49.039232    4156 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 07:55:49.039247    4156 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 07:55:49.039329    4156 start.go:353] cluster config:
	{Name:download-only-559259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-559259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 07:55:49.042273    4156 out.go:99] Starting "download-only-559259" primary control-plane node in "download-only-559259" cluster
	I1123 07:55:49.042295    4156 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 07:55:49.045265    4156 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 07:55:49.045314    4156 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 07:55:49.045456    4156 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 07:55:49.061068    4156 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:55:49.061242    4156 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 07:55:49.061348    4156 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:55:49.105021    4156 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 07:55:49.105050    4156 cache.go:65] Caching tarball of preloaded images
	I1123 07:55:49.105224    4156 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 07:55:49.109455    4156 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 07:55:49.109480    4156 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1123 07:55:49.195324    4156 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1123 07:55:49.195453    4156 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 07:55:52.361342    4156 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 07:55:52.361717    4156 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/download-only-559259/config.json ...
	I1123 07:55:52.361752    4156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/download-only-559259/config.json: {Name:mk95813f88ce601080ce19178aa31e1b33a0afd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:52.361921    4156 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 07:55:52.362110    4156 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-559259 host does not exist
	  To start a cluster, run: "minikube start -p download-only-559259"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
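
Editor's note: the "Last Start" log above shows the preload flow: preload.go looks for a local tarball, fetches the expected MD5 from the GCS API ("Got checksum ... 38d7f581f2fa4226c8af2c9106b982b7"), then downloads the tarball with a ?checksum=md5:... query. A minimal sketch of a checksum-verified download in the same spirit, using only the Go standard library; this is not minikube's download.go, whose helper interprets the checksum query itself. The URL and checksum below are taken from the log; the destination path is arbitrary:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest and fails if the MD5 of the
// downloaded bytes does not match wantMD5.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the stream while writing it to disk.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	got := hex.EncodeToString(h.Sum(nil))
	if got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"38d7f581f2fa4226c8af2c9106b982b7",
	)
	fmt.Println("download result:", err)
}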

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-559259
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (4.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-641918 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-641918 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.203131239s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 07:55:58.720138    4151 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1123 07:55:58.720171    4151 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-641918
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-641918: exit status 85 (94.367403ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-559259 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-559259 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ delete  │ -p download-only-559259                                                                                                                                                               │ download-only-559259 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ start   │ -o=json --download-only -p download-only-641918 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-641918 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:55:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:55:54.561091    4358 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:55:54.561289    4358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:54.561321    4358 out.go:374] Setting ErrFile to fd 2...
	I1123 07:55:54.561345    4358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:54.561640    4358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 07:55:54.562063    4358 out.go:368] Setting JSON to true
	I1123 07:55:54.562800    4358 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2304,"bootTime":1763882251,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 07:55:54.562891    4358 start.go:143] virtualization:  
	I1123 07:55:54.567566    4358 out.go:99] [download-only-641918] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 07:55:54.567867    4358 notify.go:221] Checking for updates...
	I1123 07:55:54.570721    4358 out.go:171] MINIKUBE_LOCATION=21966
	I1123 07:55:54.573603    4358 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:55:54.576381    4358 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 07:55:54.579305    4358 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 07:55:54.582201    4358 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 07:55:54.587945    4358 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 07:55:54.588210    4358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:55:54.614352    4358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 07:55:54.614454    4358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:54.679494    4358 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-23 07:55:54.67016011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:55:54.679594    4358 docker.go:319] overlay module found
	I1123 07:55:54.682450    4358 out.go:99] Using the docker driver based on user configuration
	I1123 07:55:54.682482    4358 start.go:309] selected driver: docker
	I1123 07:55:54.682498    4358 start.go:927] validating driver "docker" against <nil>
	I1123 07:55:54.682604    4358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:54.735170    4358 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-23 07:55:54.72603954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:55:54.735333    4358 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:55:54.735611    4358 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 07:55:54.735757    4358 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 07:55:54.738684    4358 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-641918 host does not exist
	  To start a cluster, run: "minikube start -p download-only-641918"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-641918
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.92s)

                                                
                                                
=== RUN   TestBinaryMirror
I1123 07:55:59.841804    4151 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-330710 --alsologtostderr --binary-mirror http://127.0.0.1:40325 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-330710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-330710
--- PASS: TestBinaryMirror (0.92s)
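
Editor's note: TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:40325, i.e. kubectl/kubelet downloads are redirected from dl.k8s.io to a local HTTP server. A minimal sketch of such a mirror, assuming it only needs to expose files under a dl.k8s.io-like release layout; the ./mirror directory name and layout are assumptions for illustration, not the test's actual helper:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Expected to hold files like ./mirror/v1.34.1/bin/linux/arm64/kubectl
	// so that requests made against the mirror resolve the same paths as
	// the upstream release bucket.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:40325", fs))
}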

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-243441
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-243441: exit status 85 (85.631069ms)

                                                
                                                
-- stdout --
	* Profile "addons-243441" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-243441"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-243441
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-243441: exit status 85 (86.867972ms)

                                                
                                                
-- stdout --
	* Profile "addons-243441" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-243441"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/Setup (142.64s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-243441 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-243441 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m22.641602018s)
--- PASS: TestAddons/Setup (142.64s)

                                                
                                    
x
+
TestAddons/serial/Volcano (40.86s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 56.369814ms
addons_test.go:876: volcano-admission stabilized in 56.435743ms
addons_test.go:868: volcano-scheduler stabilized in 57.01761ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-xrrqq" [b48b524f-f3f4-4a34-853e-c87f98e22d21] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.005196938s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-p9r79" [575a645c-4fba-4a16-acc0-64ad56b3afae] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00312322s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-xm6t6" [faa0d587-8196-4785-aeeb-f312e8a518f2] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003618189s
addons_test.go:903: (dbg) Run:  kubectl --context addons-243441 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-243441 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-243441 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [8eccf338-8e1d-454a-872a-6a14f87e4ae2] Pending
helpers_test.go:352: "test-job-nginx-0" [8eccf338-8e1d-454a-872a-6a14f87e4ae2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [8eccf338-8e1d-454a-872a-6a14f87e4ae2] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003106651s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-243441 addons disable volcano --alsologtostderr -v=1: (12.185763115s)
--- PASS: TestAddons/serial/Volcano (40.86s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-243441 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-243441 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.92s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-243441 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-243441 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [98488f22-4610-482a-a566-2de9dad9915d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [98488f22-4610-482a-a566-2de9dad9915d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003068769s
addons_test.go:694: (dbg) Run:  kubectl --context addons-243441 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-243441 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-243441 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-243441 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.92s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.243252ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-ghchh" [d3e2388c-843e-4087-834b-0b59d9ccb8ef] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003656009s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-wbrgq" [19230ab8-cb7f-448b-a836-669fb0f76788] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003127719s
addons_test.go:392: (dbg) Run:  kubectl --context addons-243441 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-243441 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-243441 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.071678614s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 ip
2025/11/23 07:59:36 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.13s)

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (1.00s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.688008ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-243441
addons_test.go:332: (dbg) Run:  kubectl --context addons-243441 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.00s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (20.04s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-243441 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-243441 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-243441 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1faa4b69-f383-4561-8b8a-f40e6479955f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1faa4b69-f383-4561-8b8a-f40e6479955f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003331063s
I1123 08:00:57.947674    4151 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-243441 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-243441 addons disable ingress-dns --alsologtostderr -v=1: (1.414919626s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-243441 addons disable ingress --alsologtostderr -v=1: (7.86546423s)
--- PASS: TestAddons/parallel/Ingress (20.04s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.97s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-w9xg9" [44df2619-ae98-42ff-8998-a95d87afa415] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005922598s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-243441 addons disable inspektor-gadget --alsologtostderr -v=1: (5.958255689s)
--- PASS: TestAddons/parallel/InspektorGadget (11.97s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.177041ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-42gpv" [a16f8264-c149-407e-bf45-125870d89c9b] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00310552s
addons_test.go:463: (dbg) Run:  kubectl --context addons-243441 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.80s)

                                                
                                    
x
+
TestAddons/parallel/CSI (49.01s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1123 07:59:34.471367    4151 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 07:59:34.475498    4151 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 07:59:34.475522    4151 kapi.go:107] duration metric: took 7.369654ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.379812ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-243441 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-243441 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [69880bc1-280b-456e-8586-87e316ae6af2] Pending
helpers_test.go:352: "task-pv-pod" [69880bc1-280b-456e-8586-87e316ae6af2] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003006799s
addons_test.go:572: (dbg) Run:  kubectl --context addons-243441 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-243441 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-243441 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-243441 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-243441 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-243441 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-243441 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [e9172038-4ef8-4b21-91f8-d6390369a398] Pending
helpers_test.go:352: "task-pv-pod-restore" [e9172038-4ef8-4b21-91f8-d6390369a398] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [e9172038-4ef8-4b21-91f8-d6390369a398] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003820861s
addons_test.go:614: (dbg) Run:  kubectl --context addons-243441 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-243441 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-243441 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-243441 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.910860195s)
--- PASS: TestAddons/parallel/CSI (49.01s)
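The repeated "kubectl get pvc ... -o jsonpath={.status.phase}" lines above are the test helper polling until the claim reports Bound (and later until the restored claim does the same). A minimal hand-rolled equivalent, assuming a 2-second interval since the helper's actual backoff is not visible in this log:

    until [ "$(kubectl --context addons-243441 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done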

                                                
                                    
x
+
TestAddons/parallel/Headlamp (11.32s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-243441 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-243441 --alsologtostderr -v=1: (1.021107549s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-dxcq9" [8b3f36e2-3632-4d43-ab30-fd4b2a3a4d2a] Pending
helpers_test.go:352: "headlamp-dfcdc64b-dxcq9" [8b3f36e2-3632-4d43-ab30-fd4b2a3a4d2a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-dxcq9" [8b3f36e2-3632-4d43-ab30-fd4b2a3a4d2a] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-dxcq9" [8b3f36e2-3632-4d43-ab30-fd4b2a3a4d2a] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00300242s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.32s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-m4dwn" [659be633-5596-4913-9282-b2ddbd5598ad] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003291559s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-243441 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-243441 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243441 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [27c04a7e-15a8-4566-b7b0-c604a1a81388] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [27c04a7e-15a8-4566-b7b0-c604a1a81388] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [27c04a7e-15a8-4566-b7b0-c604a1a81388] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003647746s
addons_test.go:967: (dbg) Run:  kubectl --context addons-243441 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 ssh "cat /opt/local-path-provisioner/pvc-ae32394f-3d7a-46ef-a8ff-849df0807f89_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-243441 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-243441 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-243441 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.98617115s)
--- PASS: TestAddons/parallel/LocalPath (53.23s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.55s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5qkvs" [7112ca65-b1fe-4129-af69-2d4ea9aa2b19] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003773648s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.55s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-db6kq" [f7efee9a-67cd-40dc-a94f-91c0c0283b49] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004009458s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-243441 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-243441 addons disable yakd --alsologtostderr -v=1: (5.800958158s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-243441
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-243441: (12.039239894s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-243441
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-243441
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-243441
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

                                                
                                    
x
+
TestCertOptions (37.58s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-106536 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1123 08:40:40.261074    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-106536 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.659674182s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-106536 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-106536 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-106536 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-106536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-106536
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-106536: (2.184501385s)
--- PASS: TestCertOptions (37.58s)
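The openssl step above is what validates that the extra --apiserver-ips and --apiserver-names values landed in the apiserver certificate. A sketch of checking the SAN list by hand against the same certificate path; the grep filter is an assumption here, since the test's own matching logic lives in cert_options_test.go:

    out/minikube-linux-arm64 -p cert-options-106536 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'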

                                                
                                    
x
+
TestCertExpiration (232.77s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-119748 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-119748 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (43.759264161s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-119748 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-119748 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.585390436s)
helpers_test.go:175: Cleaning up "cert-expiration-119748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-119748
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-119748: (2.418563443s)
--- PASS: TestCertExpiration (232.77s)
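The test starts with --cert-expiration=3m, appears to wait out that window (the roughly three-minute gap in the total duration), then restarts with --cert-expiration=8760h so the certificates are regenerated. One way to eyeball the resulting expiry by hand, as a sketch that assumes the same apiserver.crt path used in TestCertOptions above:

    out/minikube-linux-arm64 -p cert-expiration-119748 ssh \
      "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"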

                                                
                                    
x
+
TestForceSystemdFlag (38.46s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-517113 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-517113 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.666375598s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-517113 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-517113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-517113
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-517113: (2.454416714s)
--- PASS: TestForceSystemdFlag (38.46s)

                                                
                                    
x
+
TestForceSystemdEnv (42.66s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-760522 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-760522 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.413725745s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-760522 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-760522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-760522
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-760522: (2.780805442s)
--- PASS: TestForceSystemdEnv (42.66s)
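Both force-systemd variants above finish by cat-ing /etc/containerd/config.toml. The assertion itself sits in docker_test.go and is not shown here, but the relevant knob in containerd's runc options is SystemdCgroup, so a direct spot-check would look roughly like this (the grep pattern is an assumption, not the test's code):

    out/minikube-linux-arm64 -p force-systemd-env-760522 ssh \
      "grep -n 'SystemdCgroup' /etc/containerd/config.toml"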

                                                
                                    
x
+
TestDockerEnvContainerd (46.08s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-982376 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-982376 --driver=docker  --container-runtime=containerd: (30.155147229s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-982376"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-982376": (1.10032674s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PWBfjJwoHdhM/agent.23825" SSH_AGENT_PID="23826" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PWBfjJwoHdhM/agent.23825" SSH_AGENT_PID="23826" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PWBfjJwoHdhM/agent.23825" SSH_AGENT_PID="23826" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.184176407s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PWBfjJwoHdhM/agent.23825" SSH_AGENT_PID="23826" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-982376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-982376
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-982376: (2.100720853s)
--- PASS: TestDockerEnvContainerd (46.08s)
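The docker-env steps above export an SSH-backed DOCKER_HOST (ssh://docker@127.0.0.1:32773 in this run) plus an SSH agent socket, so plain docker commands land inside the minikube node. Interactively one would normally eval the output rather than capture the variables by hand, roughly:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-982376)"
    docker image ls    # now talks to the daemon inside the dockerenv-982376 node over SSH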

                                                
                                    
x
+
TestErrorSpam/setup (31.35s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-119156 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-119156 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-119156 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-119156 --driver=docker  --container-runtime=containerd: (31.346599782s)
--- PASS: TestErrorSpam/setup (31.35s)

                                                
                                    
x
+
TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
x
+
TestErrorSpam/status (1.24s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 status
--- PASS: TestErrorSpam/status (1.24s)

                                                
                                    
x
+
TestErrorSpam/pause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 pause
--- PASS: TestErrorSpam/pause (1.74s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
x
+
TestErrorSpam/stop (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 stop: (1.445583142s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119156 --log_dir /tmp/nospam-119156 stop
--- PASS: TestErrorSpam/stop (1.65s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21966-2339/.minikube/files/etc/test/nested/copy/4151/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (78.2s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-638783 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1123 08:03:23.534002    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:23.540397    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:23.551738    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:23.573090    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:23.614436    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:23.695841    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:23.857438    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:24.179138    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:24.821144    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:26.102913    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:28.665571    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:33.786910    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:44.029246    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:04:04.510877    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-638783 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m18.202464964s)
--- PASS: TestFunctional/serial/StartWithProxy (78.20s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0.00s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (7.24s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1123 08:04:13.833328    4151 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-638783 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-638783 --alsologtostderr -v=8: (7.232200795s)
functional_test.go:678: soft start took 7.235922151s for "functional-638783" cluster.
I1123 08:04:21.065860    4151 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.24s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-638783 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-638783 cache add registry.k8s.io/pause:3.1: (1.27072968s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-638783 cache add registry.k8s.io/pause:3.3: (1.128757096s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-638783 cache add registry.k8s.io/pause:latest: (1.038696255s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-638783 /tmp/TestFunctionalserialCacheCmdcacheadd_local402188357/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 cache add minikube-local-cache-test:functional-638783
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 cache delete minikube-local-cache-test:functional-638783
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-638783
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-638783 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.090784ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)
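Taken together, the CacheCmd cases above walk the full lifecycle of minikube's image cache. A condensed recap of the same commands in order, as a sketch with the image and profile names reused from this run:

    out/minikube-linux-arm64 -p functional-638783 cache add registry.k8s.io/pause:3.1   # pull into the local cache and load into the node
    out/minikube-linux-arm64 cache list                                                 # list cached images
    out/minikube-linux-arm64 -p functional-638783 cache reload                          # re-load cached images after they were removed in-node
    out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1                     # drop the image from the cache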

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 kubectl -- --context functional-638783 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-638783 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (62.82s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-638783 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1123 08:04:45.473898    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-638783 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.815687574s)
functional_test.go:776: restart took 1m2.815794218s for "functional-638783" cluster.
I1123 08:05:31.371082    4151 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (62.82s)
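
The restart above passes a component flag through to the running cluster with `--extra-config` in the component.flag=value form. A sketch of the same invocation, assuming the profile already exists so the flag is applied on restart:

  # restart the existing profile, enabling an extra admission plugin on the apiserver;
  # --wait=all blocks until all verified components report healthy
  minikube start -p functional-638783 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all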

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-638783 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
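
ComponentHealth only reads the control-plane pods that kubectl already exposes and checks phase and readiness. A hand-rolled equivalent, assuming a jsonpath listing is an acceptable stand-in for the test's full JSON parse:

  # list control-plane pods with their phase; each should report Running
  kubectl --context functional-638783 get pods -n kube-system -l tier=control-plane \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'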

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-638783 logs: (1.456816912s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 logs --file /tmp/TestFunctionalserialLogsFileCmd3743792477/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-638783 logs --file /tmp/TestFunctionalserialLogsFileCmd3743792477/001/logs.txt: (1.482019742s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.86s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-638783 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-638783
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-638783: exit status 115 (951.520771ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32409 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-638783 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.86s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-638783 config get cpus: exit status 14 (62.714244ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-638783 config get cpus: exit status 14 (101.416484ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
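
The ConfigCmd run shows the behavior being asserted: `config get` on an unset key exits with status 14, while set/unset succeed silently. A short reproduction using the same per-profile key:

  # unset key -> get fails with exit status 14
  minikube -p functional-638783 config unset cpus
  minikube -p functional-638783 config get cpus; echo "exit: $?"
  # set it, read it back, then clean up
  minikube -p functional-638783 config set cpus 2
  minikube -p functional-638783 config get cpus
  minikube -p functional-638783 config unset cpus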

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (8.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-638783 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-638783 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 41353: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.91s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-638783 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-638783 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (272.591722ms)

                                                
                                                
-- stdout --
	* [functional-638783] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:06:19.176355   40419 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:06:19.176631   40419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:06:19.176664   40419 out.go:374] Setting ErrFile to fd 2...
	I1123 08:06:19.176684   40419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:06:19.177025   40419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:06:19.177533   40419 out.go:368] Setting JSON to false
	I1123 08:06:19.178746   40419 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2928,"bootTime":1763882251,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:06:19.178855   40419 start.go:143] virtualization:  
	I1123 08:06:19.184124   40419 out.go:179] * [functional-638783] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:06:19.187177   40419 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:06:19.187257   40419 notify.go:221] Checking for updates...
	I1123 08:06:19.193058   40419 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:06:19.195871   40419 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:06:19.199168   40419 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 08:06:19.202279   40419 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:06:19.205218   40419 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:06:19.208660   40419 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:06:19.209335   40419 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:06:19.252712   40419 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:06:19.253003   40419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:06:19.342964   40419 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 08:06:19.333042158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:06:19.343065   40419 docker.go:319] overlay module found
	I1123 08:06:19.346332   40419 out.go:179] * Using the docker driver based on existing profile
	I1123 08:06:19.349269   40419 start.go:309] selected driver: docker
	I1123 08:06:19.349291   40419 start.go:927] validating driver "docker" against &{Name:functional-638783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-638783 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:06:19.349388   40419 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:06:19.353017   40419 out.go:203] 
	W1123 08:06:19.355806   40419 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 08:06:19.358655   40419 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-638783 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.56s)
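
DryRun confirms that argument validation happens before anything is provisioned: an impossible memory request fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while a plain dry run passes. A minimal sketch, assuming the docker driver and containerd runtime used throughout this job:

  # validation only; no containers are created or modified
  minikube start -p functional-638783 --dry-run --memory 250MB \
    --driver=docker --container-runtime=containerd; echo "exit: $?"   # expect 23
  minikube start -p functional-638783 --dry-run \
    --driver=docker --container-runtime=containerd                    # expect 0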

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-638783 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-638783 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (294.732772ms)

                                                
                                                
-- stdout --
	* [functional-638783] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:06:21.119775   41021 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:06:21.120023   41021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:06:21.120032   41021 out.go:374] Setting ErrFile to fd 2...
	I1123 08:06:21.120052   41021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:06:21.122082   41021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:06:21.122718   41021 out.go:368] Setting JSON to false
	I1123 08:06:21.123892   41021 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2930,"bootTime":1763882251,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:06:21.124007   41021 start.go:143] virtualization:  
	I1123 08:06:21.128966   41021 out.go:179] * [functional-638783] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1123 08:06:21.133202   41021 notify.go:221] Checking for updates...
	I1123 08:06:21.137462   41021 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:06:21.140457   41021 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:06:21.143303   41021 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:06:21.146181   41021 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 08:06:21.149003   41021 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:06:21.155571   41021 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:06:21.160009   41021 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:06:21.160734   41021 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:06:21.211995   41021 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:06:21.212113   41021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:06:21.295628   41021 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 08:06:21.285671661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:06:21.295729   41021 docker.go:319] overlay module found
	I1123 08:06:21.298711   41021 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 08:06:21.301552   41021 start.go:309] selected driver: docker
	I1123 08:06:21.301571   41021 start.go:927] validating driver "docker" against &{Name:functional-638783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-638783 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:06:21.301664   41021 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:06:21.305350   41021 out.go:203] 
	W1123 08:06:21.309079   41021 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 08:06:21.312010   41021 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)
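
The French messages above come from minikube's localized output; the test drives the same dry run under a French locale. A sketch assuming the translation is picked up from the standard LC_ALL/LANG environment variables, as the translated RSRC_INSUFFICIENT_REQ_MEMORY message suggests:

  # same failing dry run, but with a French locale so the error text is localized
  LC_ALL=fr_FR.UTF-8 minikube start -p functional-638783 --dry-run --memory 250MB \
    --driver=docker --container-runtime=containerd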

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.37s)
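
StatusCmd covers the three output modes of `minikube status`: the default table, a Go-template format string, and JSON. The same calls by hand (the template keys mirror the ones in the log line above):

  # default, templated, and JSON status output for the profile
  minikube -p functional-638783 status
  minikube -p functional-638783 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  minikube -p functional-638783 status -o json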

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-638783 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-638783 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-d5678" [98d368f6-7f32-4037-b06d-7cfca7b4e75c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-d5678" [98d368f6-7f32-4037-b06d-7cfca7b4e75c] Running
E1123 08:06:07.395736    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004071823s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32598
functional_test.go:1680: http://192.168.49.2:32598: success! body:
Request served by hello-node-connect-7d85dfc575-d5678

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32598
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.63s)
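
ServiceCmdConnect is the standard NodePort round trip: create a deployment, expose it, resolve the node URL through minikube, and call it. A compact reproduction, assuming the same kicbase/echo-server image and curl available on the host; the rollout wait replaces the test's pod-matching poll:

  # deploy and expose an echo server, then fetch its NodePort URL and call it
  kubectl --context functional-638783 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-638783 expose deployment hello-node-connect --type=NodePort --port=8080
  kubectl --context functional-638783 rollout status deployment/hello-node-connect
  URL=$(minikube -p functional-638783 service hello-node-connect --url)
  curl -s "$URL"   # the echo server reflects the request, as in the body shown above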

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [2ffdfa72-bfac-4edc-8636-883f0981e0d8] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003559848s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-638783 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-638783 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-638783 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-638783 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4bf2db30-5ac7-4c0c-a24c-a8e447cb3506] Pending
helpers_test.go:352: "sp-pod" [4bf2db30-5ac7-4c0c-a24c-a8e447cb3506] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4bf2db30-5ac7-4c0c-a24c-a8e447cb3506] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003125143s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-638783 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-638783 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-638783 delete -f testdata/storage-provisioner/pod.yaml: (1.402601595s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-638783 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [84ab4ae6-e8f0-4afa-b690-7a51f782b5ad] Pending
helpers_test.go:352: "sp-pod" [84ab4ae6-e8f0-4afa-b690-7a51f782b5ad] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [84ab4ae6-e8f0-4afa-b690-7a51f782b5ad] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003694184s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-638783 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.40s)
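
The PVC test proves persistence by writing a file through one pod, deleting that pod, and reading the file back from a replacement pod bound to the same claim. The same flow using the repository's testdata manifests (paths and the sp-pod name are taken from the log above; the wait commands stand in for the test's readiness polling):

  # claim storage, run a pod that mounts it, and write a marker file
  kubectl --context functional-638783 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-638783 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-638783 wait --for=condition=Ready pod/sp-pod --timeout=120s
  kubectl --context functional-638783 exec sp-pod -- touch /tmp/mount/foo
  # recreate the pod; the file must survive because the volume is backed by the PVC
  kubectl --context functional-638783 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-638783 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-638783 wait --for=condition=Ready pod/sp-pod --timeout=120s
  kubectl --context functional-638783 exec sp-pod -- ls /tmp/mount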

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh -n functional-638783 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 cp functional-638783:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd970888580/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh -n functional-638783 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh -n functional-638783 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.09s)
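
CpCmd covers both directions of `minikube cp`: host to node, node back to host, and copying into a node path whose parent directories do not yet exist. Equivalent commands, with a plain /tmp path on the host substituted for the test's temporary directory:

  # host -> node, then read it back over ssh
  minikube -p functional-638783 cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p functional-638783 ssh "sudo cat /home/docker/cp-test.txt"
  # node -> host
  minikube -p functional-638783 cp functional-638783:/home/docker/cp-test.txt /tmp/cp-test.txt
  # host -> node into a path whose parent directories are created on the fly
  minikube -p functional-638783 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt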

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4151/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "sudo cat /etc/test/nested/copy/4151/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
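
FileSync verifies that a file the harness dropped under the minikube home's files tree appears inside the node at the mirrored path (/etc/test/nested/copy/4151/hosts, where 4151 is the test process's pid). A sketch of the same mechanism, assuming the documented behavior that anything under $MINIKUBE_HOME/files is copied into the node when the profile is (re)started; the /etc/demo path below is illustrative, not the test's:

  # place a file under the files tree, re-run start to sync it, and read it back from inside
  mkdir -p "$HOME/.minikube/files/etc/demo"
  echo "synced from the host" > "$HOME/.minikube/files/etc/demo/hello"
  minikube start -p functional-638783
  minikube -p functional-638783 ssh "cat /etc/demo/hello"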

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4151.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "sudo cat /etc/ssl/certs/4151.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4151.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "sudo cat /usr/share/ca-certificates/4151.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "sudo cat /etc/ssl/certs/41512.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "sudo cat /usr/share/ca-certificates/41512.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-638783 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-638783 ssh "sudo systemctl is-active docker": exit status 1 (504.782141ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-638783 ssh "sudo systemctl is-active crio": exit status 1 (436.238672ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-638783 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-638783 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-638783 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 35678: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-638783 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-638783 version -o=json --components: (1.356093607s)
--- PASS: TestFunctional/parallel/Version/components (1.36s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-638783 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-638783 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [33e32fc6-6987-4e3f-a7f3-51f18be2cc4f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [33e32fc6-6987-4e3f-a7f3-51f18be2cc4f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003907488s
I1123 08:05:49.707752    4151 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-638783 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-638783
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-638783
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-638783 image ls --format short --alsologtostderr:
I1123 08:06:22.839367   41339 out.go:360] Setting OutFile to fd 1 ...
I1123 08:06:22.839574   41339 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:06:22.839609   41339 out.go:374] Setting ErrFile to fd 2...
I1123 08:06:22.839631   41339 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:06:22.839966   41339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
I1123 08:06:22.840649   41339 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:06:22.840826   41339 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:06:22.841385   41339 cli_runner.go:164] Run: docker container inspect functional-638783 --format={{.State.Status}}
I1123 08:06:22.868036   41339 ssh_runner.go:195] Run: systemctl --version
I1123 08:06:22.868087   41339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-638783
I1123 08:06:22.892798   41339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/functional-638783/id_rsa Username:docker}
I1123 08:06:23.016788   41339 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
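
The ImageList variants in this group differ only in `--format`; as the stderr trace shows, the listing is read from the node's containerd store via `crictl images`. The same listing by hand in each of the formats exercised here:

  # short (refs only), table, and json listings of images cached in the node
  minikube -p functional-638783 image ls --format short
  minikube -p functional-638783 image ls --format table
  minikube -p functional-638783 image ls --format json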

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-638783 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-638783  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ localhost/my-image                          │ functional-638783  │ sha256:a5c0f5 │ 831kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ docker.io/library/minikube-local-cache-test │ functional-638783  │ sha256:9cc18a │ 991B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-638783 image ls --format table --alsologtostderr:
I1123 08:06:27.960253   41898 out.go:360] Setting OutFile to fd 1 ...
I1123 08:06:27.960477   41898 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:06:27.960506   41898 out.go:374] Setting ErrFile to fd 2...
I1123 08:06:27.960525   41898 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:06:27.960819   41898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
I1123 08:06:27.961568   41898 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:06:27.961758   41898 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:06:27.962308   41898 cli_runner.go:164] Run: docker container inspect functional-638783 --format={{.State.Status}}
I1123 08:06:27.981321   41898 ssh_runner.go:195] Run: systemctl --version
I1123 08:06:27.981380   41898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-638783
I1123 08:06:27.999642   41898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/functional-638783/id_rsa Username:docker}
I1123 08:06:28.116220   41898 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-638783 image ls --format json --alsologtostderr:
[{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"si
ze":"23117513"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:9cc18a42bd73324d6f9098e8bb4102885a33cacfc4face907cd9740910cee524","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:
functional-638783"],"size":"991"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-638783","docker.io/kicbase/echo-server:latest"],"si
ze":"2173567"},{"id":"sha256:a5c0f54f37952922a011092fb891b15ce218fb29274d8b95bdc180090c5cf601","repoDigests":[],"repoTags":["localhost/my-image:functional-638783"],"size":"830617"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:1611cd07b61d57dbbfebe6db
242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-638783 image ls --format json --alsologtostderr:
I1123 08:06:27.694907   41856 out.go:360] Setting OutFile to fd 1 ...
I1123 08:06:27.695126   41856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:06:27.695153   41856 out.go:374] Setting ErrFile to fd 2...
I1123 08:06:27.695170   41856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:06:27.695448   41856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
I1123 08:06:27.696067   41856 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:06:27.696233   41856 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:06:27.696801   41856 cli_runner.go:164] Run: docker container inspect functional-638783 --format={{.State.Status}}
I1123 08:06:27.715092   41856 ssh_runner.go:195] Run: systemctl --version
I1123 08:06:27.715161   41856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-638783
I1123 08:06:27.734955   41856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/functional-638783/id_rsa Username:docker}
I1123 08:06:27.844378   41856 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
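The JSON listing above is a flat array of image objects with id, repoDigests, repoTags, and size fields, so it is easy to post-process on the host. A minimal sketch, not part of this test run and assuming jq is installed on the host, that prints each tagged image with its size in bytes:
    # Sketch: list repo tags and sizes from the JSON image listing (assumes jq on the host)
    out/minikube-linux-arm64 -p functional-638783 image ls --format json \
      | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])  \(.size)"'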

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-638783 image ls --format yaml --alsologtostderr:
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:9cc18a42bd73324d6f9098e8bb4102885a33cacfc4face907cd9740910cee524
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-638783
size: "991"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-638783
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-638783 image ls --format yaml --alsologtostderr:
I1123 08:06:23.123588   41385 out.go:360] Setting OutFile to fd 1 ...
I1123 08:06:23.123705   41385 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:06:23.123711   41385 out.go:374] Setting ErrFile to fd 2...
I1123 08:06:23.123715   41385 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:06:23.124078   41385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
I1123 08:06:23.124974   41385 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:06:23.125109   41385 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:06:23.126968   41385 cli_runner.go:164] Run: docker container inspect functional-638783 --format={{.State.Status}}
I1123 08:06:23.155558   41385 ssh_runner.go:195] Run: systemctl --version
I1123 08:06:23.155608   41385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-638783
I1123 08:06:23.180394   41385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/functional-638783/id_rsa Username:docker}
I1123 08:06:23.289280   41385 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-638783 ssh pgrep buildkitd: exit status 1 (337.275286ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image build -t localhost/my-image:functional-638783 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-638783 image build -t localhost/my-image:functional-638783 testdata/build --alsologtostderr: (3.667749927s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-638783 image build -t localhost/my-image:functional-638783 testdata/build --alsologtostderr:
I1123 08:06:23.719700   41622 out.go:360] Setting OutFile to fd 1 ...
I1123 08:06:23.720262   41622 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:06:23.720275   41622 out.go:374] Setting ErrFile to fd 2...
I1123 08:06:23.720282   41622 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:06:23.720559   41622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
I1123 08:06:23.721274   41622 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:06:23.724900   41622 config.go:182] Loaded profile config "functional-638783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:06:23.725565   41622 cli_runner.go:164] Run: docker container inspect functional-638783 --format={{.State.Status}}
I1123 08:06:23.745667   41622 ssh_runner.go:195] Run: systemctl --version
I1123 08:06:23.745724   41622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-638783
I1123 08:06:23.768274   41622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/functional-638783/id_rsa Username:docker}
I1123 08:06:23.896114   41622 build_images.go:162] Building image from path: /tmp/build.356881154.tar
I1123 08:06:23.896251   41622 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 08:06:23.904922   41622 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.356881154.tar
I1123 08:06:23.909165   41622 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.356881154.tar: stat -c "%s %y" /var/lib/minikube/build/build.356881154.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.356881154.tar': No such file or directory
I1123 08:06:23.909194   41622 ssh_runner.go:362] scp /tmp/build.356881154.tar --> /var/lib/minikube/build/build.356881154.tar (3072 bytes)
I1123 08:06:23.931792   41622 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.356881154
I1123 08:06:23.939562   41622 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.356881154 -xf /var/lib/minikube/build/build.356881154.tar
I1123 08:06:23.948320   41622 containerd.go:394] Building image: /var/lib/minikube/build/build.356881154
I1123 08:06:23.948463   41622 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.356881154 --local dockerfile=/var/lib/minikube/build/build.356881154 --output type=image,name=localhost/my-image:functional-638783
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:e8c607a2f9ac131a2636b3d091fc9ad149bc25da05481d213d6f01c2a021bee8
#8 exporting manifest sha256:e8c607a2f9ac131a2636b3d091fc9ad149bc25da05481d213d6f01c2a021bee8 0.0s done
#8 exporting config sha256:a5c0f54f37952922a011092fb891b15ce218fb29274d8b95bdc180090c5cf601 0.0s done
#8 naming to localhost/my-image:functional-638783 done
#8 DONE 0.2s
I1123 08:06:27.297183   41622 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.356881154 --local dockerfile=/var/lib/minikube/build/build.356881154 --output type=image,name=localhost/my-image:functional-638783: (3.348675206s)
I1123 08:06:27.297245   41622 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.356881154
I1123 08:06:27.312398   41622 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.356881154.tar
I1123 08:06:27.330786   41622 build_images.go:218] Built localhost/my-image:functional-638783 from /tmp/build.356881154.tar
I1123 08:06:27.330831   41622 build_images.go:134] succeeded building to: functional-638783
I1123 08:06:27.330836   41622 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.29s)
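Judging from the BuildKit steps above ([1/3] FROM gcr.io/k8s-minikube/busybox, [2/3] RUN true, [3/3] ADD content.txt /), the context under testdata/build is a three-instruction build. A rough sketch of reproducing an equivalent build from the host; the Dockerfile body, the file contents, and the /tmp/img path are reconstructions for illustration, not taken from the test data:
    # Recreate a comparable build context (contents approximate testdata/build)
    mkdir -p /tmp/img && cd /tmp/img
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo hello > content.txt
    # minikube ships the context to the node and drives buildctl there, as the log shows
    out/minikube-linux-arm64 -p functional-638783 image build -t localhost/my-image:functional-638783 .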

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-638783
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image load --daemon kicbase/echo-server:functional-638783 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-638783 image load --daemon kicbase/echo-server:functional-638783 --alsologtostderr: (1.057063161s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image load --daemon kicbase/echo-server:functional-638783 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-638783
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image load --daemon kicbase/echo-server:functional-638783 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image save kicbase/echo-server:functional-638783 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image rm kicbase/echo-server:functional-638783 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-638783
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 image save --daemon kicbase/echo-server:functional-638783 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-638783
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
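Taken together, the save, remove, and load subtests above exercise a full image round-trip through a tarball. A condensed sketch of that flow; the /tmp path is illustrative:
    # Export the image from the cluster, drop it, then re-import it from the tarball
    out/minikube-linux-arm64 -p functional-638783 image save kicbase/echo-server:functional-638783 /tmp/echo-server.tar
    out/minikube-linux-arm64 -p functional-638783 image rm kicbase/echo-server:functional-638783
    out/minikube-linux-arm64 -p functional-638783 image load /tmp/echo-server.tar
    out/minikube-linux-arm64 -p functional-638783 image ls | grep echo-server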

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 update-context --alsologtostderr -v=2
2025/11/23 08:06:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
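update-context rewrites the kubeconfig entry for the profile so kubectl points at the cluster's current endpoint. A sketch of checking the result by hand; the jsonpath query is an illustration, not something the harness runs:
    # Refresh the kubeconfig entry, then read back the API server URL it now points to
    out/minikube-linux-arm64 -p functional-638783 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-638783")].cluster.server}'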

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-638783 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.36.165 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-638783 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
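The tunnel subtests rely on minikube tunnel assigning a routable ingress IP to the LoadBalancer service, which the test reads back with the jsonpath query shown above. A hand-run sketch of the same flow; backgrounding the tunnel and the curl check are illustrative:
    # Keep a tunnel open in the background, then hit the service on its ingress IP
    out/minikube-linux-arm64 -p functional-638783 tunnel --alsologtostderr & TUNNEL_PID=$!
    IP=$(kubectl --context functional-638783 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP/" | head -n 5
    kill "$TUNNEL_PID"   # tear the tunnel down again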

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-638783 /tmp/TestFunctionalparallelMountCmdany-port1336628278/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763885150671749024" to /tmp/TestFunctionalparallelMountCmdany-port1336628278/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763885150671749024" to /tmp/TestFunctionalparallelMountCmdany-port1336628278/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763885150671749024" to /tmp/TestFunctionalparallelMountCmdany-port1336628278/001/test-1763885150671749024
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (469.022312ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 08:05:51.144287    4151 retry.go:31] will retry after 356.349317ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 08:05 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 08:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 08:05 test-1763885150671749024
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh cat /mount-9p/test-1763885150671749024
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-638783 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [810df636-7ace-4fe3-beee-a09605fd579f] Pending
helpers_test.go:352: "busybox-mount" [810df636-7ace-4fe3-beee-a09605fd579f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [810df636-7ace-4fe3-beee-a09605fd579f] Running
helpers_test.go:352: "busybox-mount" [810df636-7ace-4fe3-beee-a09605fd579f] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [810df636-7ace-4fe3-beee-a09605fd579f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004000949s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-638783 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-638783 /tmp/TestFunctionalparallelMountCmdany-port1336628278/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.17s)
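The mount test drives a 9p mount from a host temp directory into the guest and checks it both over SSH and from a pod. A minimal manual version of the host-side checks; the directory names are illustrative:
    # Expose a host directory inside the node over 9p, then verify it from the guest
    out/minikube-linux-arm64 mount -p functional-638783 /tmp/hostdir:/mount-9p &
    out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-638783 ssh -- ls -la /mount-9p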

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-638783 /tmp/TestFunctionalparallelMountCmdspecific-port1722392159/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (574.071079ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 08:05:59.410409    4151 retry.go:31] will retry after 435.200511ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-638783 /tmp/TestFunctionalparallelMountCmdspecific-port1722392159/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-638783 ssh "sudo umount -f /mount-9p": exit status 1 (285.421764ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-638783 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-638783 /tmp/TestFunctionalparallelMountCmdspecific-port1722392159/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.36s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-638783 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913371333/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-638783 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913371333/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-638783 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913371333/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T" /mount1: exit status 1 (634.805356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 08:06:01.836709    4151 retry.go:31] will retry after 640.718274ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-638783 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-638783 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913371333/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-638783 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913371333/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-638783 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2913371333/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.20s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-638783 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-638783 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-f5ldx" [301218ae-230a-4d94-95e9-fe701b986043] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-f5ldx" [301218ae-230a-4d94-95e9-fe701b986043] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003816934s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)
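The service tests start from a plain deployment exposed as a NodePort, as above. A compact sketch of that setup with an explicit readiness wait; the kubectl wait step is an illustration, not what the harness runs:
    # Deploy the echo server, expose it on a NodePort, and wait for the pod to become ready
    kubectl --context functional-638783 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-638783 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-638783 wait --for=condition=ready pod -l app=hello-node --timeout=120s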

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "406.253461ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "91.990514ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "430.237906ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "64.576486ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 service list -o json
functional_test.go:1504: Took "653.412236ms" to run "out/minikube-linux-arm64 -p functional-638783 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30880
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-638783 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30880
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)
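Once service ... --url resolves the NodePort endpoint (http://192.168.49.2:30880 here), it can be consumed directly. A short sketch; the curl call is illustrative:
    # Resolve the service URL and make a request against it
    URL=$(out/minikube-linux-arm64 -p functional-638783 service hello-node --url)
    curl -s "$URL"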

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-638783
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-638783
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-638783
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (221.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1123 08:08:23.522927    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:51.237279    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m40.287322665s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (221.18s)
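The HA tests boot a multi-control-plane profile with --ha and then poll status. A sketch of the same start plus a quick node check; the get nodes step is an illustration added here:
    # Start a multi-control-plane cluster and confirm all nodes registered
    out/minikube-linux-arm64 -p ha-630436 start --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5
    kubectl --context ha-630436 get nodes -o wide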

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 kubectl -- rollout status deployment/busybox: (4.946899526s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-4f9f7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-gkpv4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-xl4ph -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-4f9f7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-gkpv4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-xl4ph -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-4f9f7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-gkpv4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-xl4ph -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.92s)
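The deploy check above resolves in-cluster DNS from each busybox replica. A one-pod sketch of the same probe; picking the first pod in the default namespace via jsonpath is the illustrative part:
    # Grab one busybox pod and resolve the in-cluster service name from inside it
    POD=$(out/minikube-linux-arm64 -p ha-630436 kubectl -- get pods -o jsonpath='{.items[0].metadata.name}')
    out/minikube-linux-arm64 -p ha-630436 kubectl -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local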

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-4f9f7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-4f9f7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-gkpv4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-gkpv4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-xl4ph -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 kubectl -- exec busybox-7b57f96db7-xl4ph -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (59.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 node add --alsologtostderr -v 5
E1123 08:10:40.260085    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:10:40.266962    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:10:40.278694    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:10:40.300064    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:10:40.341542    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:10:40.423191    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:10:40.585737    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:10:40.907379    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:10:41.549388    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:10:42.831365    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:10:45.392729    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:10:50.514999    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:11:00.756658    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:11:21.237952    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 node add --alsologtostderr -v 5: (58.83137377s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5: (1.075588518s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.91s)
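For reference, the worker-node addition exercised above reduces to two CLI calls; a minimal sketch using the profile name from this run (the cert_rotation errors interleaved above appear to come from a previously deleted functional-638783 profile whose client certificate no longer exists on disk, and are unrelated to this test):
	out/minikube-linux-arm64 -p ha-630436 node add --alsologtostderr -v 5
	out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5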

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-630436 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.108163878s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (20.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 status --output json --alsologtostderr -v 5: (1.046952671s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp testdata/cp-test.txt ha-630436:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile520128815/001/cp-test_ha-630436.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436:/home/docker/cp-test.txt ha-630436-m02:/home/docker/cp-test_ha-630436_ha-630436-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m02 "sudo cat /home/docker/cp-test_ha-630436_ha-630436-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436:/home/docker/cp-test.txt ha-630436-m03:/home/docker/cp-test_ha-630436_ha-630436-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m03 "sudo cat /home/docker/cp-test_ha-630436_ha-630436-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436:/home/docker/cp-test.txt ha-630436-m04:/home/docker/cp-test_ha-630436_ha-630436-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m04 "sudo cat /home/docker/cp-test_ha-630436_ha-630436-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp testdata/cp-test.txt ha-630436-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile520128815/001/cp-test_ha-630436-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m02:/home/docker/cp-test.txt ha-630436:/home/docker/cp-test_ha-630436-m02_ha-630436.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436 "sudo cat /home/docker/cp-test_ha-630436-m02_ha-630436.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m02:/home/docker/cp-test.txt ha-630436-m03:/home/docker/cp-test_ha-630436-m02_ha-630436-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m03 "sudo cat /home/docker/cp-test_ha-630436-m02_ha-630436-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m02:/home/docker/cp-test.txt ha-630436-m04:/home/docker/cp-test_ha-630436-m02_ha-630436-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m04 "sudo cat /home/docker/cp-test_ha-630436-m02_ha-630436-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp testdata/cp-test.txt ha-630436-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile520128815/001/cp-test_ha-630436-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m03:/home/docker/cp-test.txt ha-630436:/home/docker/cp-test_ha-630436-m03_ha-630436.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436 "sudo cat /home/docker/cp-test_ha-630436-m03_ha-630436.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m03:/home/docker/cp-test.txt ha-630436-m02:/home/docker/cp-test_ha-630436-m03_ha-630436-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m02 "sudo cat /home/docker/cp-test_ha-630436-m03_ha-630436-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m03:/home/docker/cp-test.txt ha-630436-m04:/home/docker/cp-test_ha-630436-m03_ha-630436-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m04 "sudo cat /home/docker/cp-test_ha-630436-m03_ha-630436-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp testdata/cp-test.txt ha-630436-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile520128815/001/cp-test_ha-630436-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m04:/home/docker/cp-test.txt ha-630436:/home/docker/cp-test_ha-630436-m04_ha-630436.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436 "sudo cat /home/docker/cp-test_ha-630436-m04_ha-630436.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m04:/home/docker/cp-test.txt ha-630436-m02:/home/docker/cp-test_ha-630436-m04_ha-630436-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m02 "sudo cat /home/docker/cp-test_ha-630436-m04_ha-630436-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 cp ha-630436-m04:/home/docker/cp-test.txt ha-630436-m03:/home/docker/cp-test_ha-630436-m04_ha-630436-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m03 "sudo cat /home/docker/cp-test_ha-630436-m04_ha-630436-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.26s)
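The copy matrix above repeats one pattern for every node pair: copy local test data onto a node, copy it node-to-node, then cat it over ssh to compare. One round trip, with names taken from this run:
	out/minikube-linux-arm64 -p ha-630436 cp testdata/cp-test.txt ha-630436:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-630436 cp ha-630436:/home/docker/cp-test.txt ha-630436-m02:/home/docker/cp-test_ha-630436_ha-630436-m02.txt
	out/minikube-linux-arm64 -p ha-630436 ssh -n ha-630436-m02 "sudo cat /home/docker/cp-test_ha-630436_ha-630436-m02.txt"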

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (2.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 node stop m02 --alsologtostderr -v 5: (1.498220342s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5: exit status 7 (821.465037ms)

                                                
                                                
-- stdout --
	ha-630436
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-630436-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-630436-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-630436-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:11:46.622944   58093 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:11:46.623183   58093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:11:46.623198   58093 out.go:374] Setting ErrFile to fd 2...
	I1123 08:11:46.623204   58093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:11:46.623496   58093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:11:46.623716   58093 out.go:368] Setting JSON to false
	I1123 08:11:46.623753   58093 mustload.go:66] Loading cluster: ha-630436
	I1123 08:11:46.623860   58093 notify.go:221] Checking for updates...
	I1123 08:11:46.624156   58093 config.go:182] Loaded profile config "ha-630436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:11:46.624174   58093 status.go:174] checking status of ha-630436 ...
	I1123 08:11:46.624998   58093 cli_runner.go:164] Run: docker container inspect ha-630436 --format={{.State.Status}}
	I1123 08:11:46.646119   58093 status.go:371] ha-630436 host status = "Running" (err=<nil>)
	I1123 08:11:46.646141   58093 host.go:66] Checking if "ha-630436" exists ...
	I1123 08:11:46.646436   58093 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-630436
	I1123 08:11:46.687274   58093 host.go:66] Checking if "ha-630436" exists ...
	I1123 08:11:46.687587   58093 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:11:46.687624   58093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-630436
	I1123 08:11:46.709964   58093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/ha-630436/id_rsa Username:docker}
	I1123 08:11:46.822921   58093 ssh_runner.go:195] Run: systemctl --version
	I1123 08:11:46.829225   58093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:11:46.842158   58093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:11:46.899106   58093 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-23 08:11:46.888707255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:11:46.905908   58093 kubeconfig.go:125] found "ha-630436" server: "https://192.168.49.254:8443"
	I1123 08:11:46.905954   58093 api_server.go:166] Checking apiserver status ...
	I1123 08:11:46.906013   58093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:11:46.927377   58093 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1478/cgroup
	I1123 08:11:46.936764   58093 api_server.go:182] apiserver freezer: "10:freezer:/docker/052f010937b766698c87607e539999e1cace30d41bb7c9289101de871d21fde7/kubepods/burstable/pod7de7028217fca0acdb2ca6b24d97504e/66ede355a962d87ab00b794a3e052fa583acc0855d024992f7eb0e8be759fd7f"
	I1123 08:11:46.936862   58093 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/052f010937b766698c87607e539999e1cace30d41bb7c9289101de871d21fde7/kubepods/burstable/pod7de7028217fca0acdb2ca6b24d97504e/66ede355a962d87ab00b794a3e052fa583acc0855d024992f7eb0e8be759fd7f/freezer.state
	I1123 08:11:46.946636   58093 api_server.go:204] freezer state: "THAWED"
	I1123 08:11:46.946666   58093 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:11:46.957855   58093 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:11:46.957887   58093 status.go:463] ha-630436 apiserver status = Running (err=<nil>)
	I1123 08:11:46.957897   58093 status.go:176] ha-630436 status: &{Name:ha-630436 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:11:46.957919   58093 status.go:174] checking status of ha-630436-m02 ...
	I1123 08:11:46.958239   58093 cli_runner.go:164] Run: docker container inspect ha-630436-m02 --format={{.State.Status}}
	I1123 08:11:46.977180   58093 status.go:371] ha-630436-m02 host status = "Stopped" (err=<nil>)
	I1123 08:11:46.977204   58093 status.go:384] host is not running, skipping remaining checks
	I1123 08:11:46.977212   58093 status.go:176] ha-630436-m02 status: &{Name:ha-630436-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:11:46.977240   58093 status.go:174] checking status of ha-630436-m03 ...
	I1123 08:11:46.977649   58093 cli_runner.go:164] Run: docker container inspect ha-630436-m03 --format={{.State.Status}}
	I1123 08:11:46.996023   58093 status.go:371] ha-630436-m03 host status = "Running" (err=<nil>)
	I1123 08:11:46.996047   58093 host.go:66] Checking if "ha-630436-m03" exists ...
	I1123 08:11:46.996367   58093 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-630436-m03
	I1123 08:11:47.017073   58093 host.go:66] Checking if "ha-630436-m03" exists ...
	I1123 08:11:47.017375   58093 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:11:47.017462   58093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-630436-m03
	I1123 08:11:47.035336   58093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/ha-630436-m03/id_rsa Username:docker}
	I1123 08:11:47.143217   58093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:11:47.162633   58093 kubeconfig.go:125] found "ha-630436" server: "https://192.168.49.254:8443"
	I1123 08:11:47.162661   58093 api_server.go:166] Checking apiserver status ...
	I1123 08:11:47.162715   58093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:11:47.176476   58093 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup
	I1123 08:11:47.192427   58093 api_server.go:182] apiserver freezer: "10:freezer:/docker/52e07ba3551198e722b34ac2bed5419dd1444cef5200736b85bc5b8264ec1b19/kubepods/burstable/pod5ef5c1f159714500dab23411bb2edba4/57cf78d226d100679fd768e88966859eca09b9d5b587dc6a27ca88bcb95c6b96"
	I1123 08:11:47.192531   58093 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/52e07ba3551198e722b34ac2bed5419dd1444cef5200736b85bc5b8264ec1b19/kubepods/burstable/pod5ef5c1f159714500dab23411bb2edba4/57cf78d226d100679fd768e88966859eca09b9d5b587dc6a27ca88bcb95c6b96/freezer.state
	I1123 08:11:47.200309   58093 api_server.go:204] freezer state: "THAWED"
	I1123 08:11:47.200338   58093 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:11:47.208611   58093 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:11:47.208638   58093 status.go:463] ha-630436-m03 apiserver status = Running (err=<nil>)
	I1123 08:11:47.208648   58093 status.go:176] ha-630436-m03 status: &{Name:ha-630436-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:11:47.208681   58093 status.go:174] checking status of ha-630436-m04 ...
	I1123 08:11:47.209028   58093 cli_runner.go:164] Run: docker container inspect ha-630436-m04 --format={{.State.Status}}
	I1123 08:11:47.229063   58093 status.go:371] ha-630436-m04 host status = "Running" (err=<nil>)
	I1123 08:11:47.229090   58093 host.go:66] Checking if "ha-630436-m04" exists ...
	I1123 08:11:47.229380   58093 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-630436-m04
	I1123 08:11:47.250602   58093 host.go:66] Checking if "ha-630436-m04" exists ...
	I1123 08:11:47.250918   58093 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:11:47.250962   58093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-630436-m04
	I1123 08:11:47.269044   58093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/ha-630436-m04/id_rsa Username:docker}
	I1123 08:11:47.370919   58093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:11:47.384024   58093 status.go:176] ha-630436-m04 status: &{Name:ha-630436-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (2.32s)
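The exit status 7 above is expected: minikube status returns non-zero whenever a node is not running, so the test treats it as the signal that m02 is down rather than as a failure. The stop-and-check pair, as run here:
	out/minikube-linux-arm64 -p ha-630436 node stop m02 --alsologtostderr -v 5
	out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5   # exit status 7 while m02 is stopped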

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (13.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 node start m02 --alsologtostderr -v 5: (11.756009061s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5: (1.484698085s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.36s)
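Bringing the stopped control plane back is the inverse operation, followed by the same checks:
	out/minikube-linux-arm64 -p ha-630436 node start m02 --alsologtostderr -v 5
	out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5
	kubectl get nodes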

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1123 08:12:02.199545    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.551188576s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (91.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 stop --alsologtostderr -v 5: (27.979827793s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 start --wait true --alsologtostderr -v 5
E1123 08:13:23.522483    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:13:24.121158    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 start --wait true --alsologtostderr -v 5: (1m3.105730648s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (91.27s)
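The whole-cluster restart is a plain stop followed by start --wait true; the node list is captured before and after to confirm nothing was dropped. In outline:
	out/minikube-linux-arm64 -p ha-630436 node list --alsologtostderr -v 5
	out/minikube-linux-arm64 -p ha-630436 stop --alsologtostderr -v 5
	out/minikube-linux-arm64 -p ha-630436 start --wait true --alsologtostderr -v 5
	out/minikube-linux-arm64 -p ha-630436 node list --alsologtostderr -v 5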

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 node delete m03 --alsologtostderr -v 5: (10.380462123s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.51s)
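Deleting a secondary control plane and verifying the remaining nodes, including the go-template readiness check used above:
	out/minikube-linux-arm64 -p ha-630436 node delete m03 --alsologtostderr -v 5
	kubectl get nodes
	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"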

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 stop --alsologtostderr -v 5: (36.514130478s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5: exit status 7 (111.359515ms)

                                                
                                                
-- stdout --
	ha-630436
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-630436-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-630436-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:14:23.257364   72992 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:14:23.257510   72992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:14:23.257522   72992 out.go:374] Setting ErrFile to fd 2...
	I1123 08:14:23.257527   72992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:14:23.257814   72992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:14:23.257992   72992 out.go:368] Setting JSON to false
	I1123 08:14:23.258034   72992 mustload.go:66] Loading cluster: ha-630436
	I1123 08:14:23.258110   72992 notify.go:221] Checking for updates...
	I1123 08:14:23.259325   72992 config.go:182] Loaded profile config "ha-630436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:14:23.259348   72992 status.go:174] checking status of ha-630436 ...
	I1123 08:14:23.260059   72992 cli_runner.go:164] Run: docker container inspect ha-630436 --format={{.State.Status}}
	I1123 08:14:23.277317   72992 status.go:371] ha-630436 host status = "Stopped" (err=<nil>)
	I1123 08:14:23.277340   72992 status.go:384] host is not running, skipping remaining checks
	I1123 08:14:23.277347   72992 status.go:176] ha-630436 status: &{Name:ha-630436 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:14:23.277374   72992 status.go:174] checking status of ha-630436-m02 ...
	I1123 08:14:23.277737   72992 cli_runner.go:164] Run: docker container inspect ha-630436-m02 --format={{.State.Status}}
	I1123 08:14:23.299662   72992 status.go:371] ha-630436-m02 host status = "Stopped" (err=<nil>)
	I1123 08:14:23.299687   72992 status.go:384] host is not running, skipping remaining checks
	I1123 08:14:23.299695   72992 status.go:176] ha-630436-m02 status: &{Name:ha-630436-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:14:23.299714   72992 status.go:174] checking status of ha-630436-m04 ...
	I1123 08:14:23.300011   72992 cli_runner.go:164] Run: docker container inspect ha-630436-m04 --format={{.State.Status}}
	I1123 08:14:23.319911   72992 status.go:371] ha-630436-m04 host status = "Stopped" (err=<nil>)
	I1123 08:14:23.319933   72992 status.go:384] host is not running, skipping remaining checks
	I1123 08:14:23.319941   72992 status.go:176] ha-630436-m04 status: &{Name:ha-630436-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (68.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m7.544654881s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (87.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 node add --control-plane --alsologtostderr -v 5
E1123 08:15:40.260582    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:16:07.965600    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 node add --control-plane --alsologtostderr -v 5: (1m25.962389827s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5: (1.073137732s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (87.04s)
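Adding a third control-plane node differs from the earlier worker case only by the --control-plane flag:
	out/minikube-linux-arm64 -p ha-630436 node add --control-plane --alsologtostderr -v 5
	out/minikube-linux-arm64 -p ha-630436 status --alsologtostderr -v 5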

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.517622912s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.52s)

                                                
                                    
x
+
TestJSONOutput/start/Command (76.69s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-298229 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1123 08:18:23.523866    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-298229 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m16.690674515s)
--- PASS: TestJSONOutput/start/Command (76.69s)
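The JSON-output group drives an ordinary start but with --output=json, so every progress step is emitted as a CloudEvents-style JSON object on stdout (see TestErrorJSONOutput below for the field layout). The invocation used here:
	out/minikube-linux-arm64 start -p json-output-298229 --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=containerd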

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-298229 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-298229 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.91s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-298229 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-298229 --output=json --user=testUser: (5.912897529s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-199585 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-199585 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (88.906614ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9001ac55-1401-47e5-ad32-3ca234a41af6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-199585] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"73be9a47-71c9-433b-a5f8-6389fb9d1e92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21966"}}
	{"specversion":"1.0","id":"e9ced76d-1536-4b33-b14d-8beeb667efd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"254759ef-7dd3-40d4-9c08-974111f21583","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig"}}
	{"specversion":"1.0","id":"6f2c53e1-61ff-418e-8469-ceb5715b4fd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube"}}
	{"specversion":"1.0","id":"1153731f-f6d0-45d1-a8bf-9a56f252d80c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e777b56c-6b6d-494c-9a6f-5dcae6c9b868","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eccdb11d-a4d3-498d-8995-2ec1ea1e01c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-199585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-199585
--- PASS: TestErrorJSONOutput (0.23s)
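The unsupported driver makes start exit with code 56, and the last event in the stream above is the machine-readable error: type io.k8s.sigs.minikube.error with data.exitcode "56", data.name "DRV_UNSUPPORTED_OS" and the human-readable message. A rough way to isolate that record from the stream (the grep is only an illustration, not part of the test):
	out/minikube-linux-arm64 start -p json-output-error-199585 --memory=3072 --output=json --wait=true --driver=fail | grep io.k8s.sigs.minikube.error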

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (41.35s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-914777 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-914777 --network=: (39.183044561s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-914777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-914777
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-914777: (2.142508455s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.35s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (34.75s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-639304 --network=bridge
E1123 08:19:46.599857    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-639304 --network=bridge: (32.567295604s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-639304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-639304
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-639304: (2.152229354s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.75s)
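The two variants above pass --network= (left empty) and --network=bridge respectively, then list the Docker networks to verify the result:
	out/minikube-linux-arm64 start -p docker-network-914777 --network=
	out/minikube-linux-arm64 start -p docker-network-639304 --network=bridge
	docker network ls --format {{.Name}}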

                                                
                                    
x
+
TestKicExistingNetwork (35.86s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1123 08:19:55.683910    4151 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 08:19:55.700299    4151 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 08:19:55.700378    4151 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 08:19:55.700400    4151 cli_runner.go:164] Run: docker network inspect existing-network
W1123 08:19:55.718681    4151 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 08:19:55.718714    4151 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1123 08:19:55.718730    4151 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1123 08:19:55.718832    4151 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 08:19:55.738784    4151 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a946cc9c0edf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:ea:52:17:a9:7a} reservation:<nil>}
I1123 08:19:55.739140    4151 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017de0a0}
I1123 08:19:55.739165    4151 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 08:19:55.739219    4151 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1123 08:19:55.800055    4151 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-078357 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-078357 --network=existing-network: (33.568763829s)
helpers_test.go:175: Cleaning up "existing-network-078357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-078357
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-078357: (2.138554727s)
I1123 08:20:31.523760    4151 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.86s)
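Here the network is created up front with plain docker and minikube is pointed at it by name; the create call from the log and the start that reuses it:
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
	out/minikube-linux-arm64 start -p existing-network-078357 --network=existing-network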

                                                
                                    
x
+
TestKicCustomSubnet (35.42s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-236739 --subnet=192.168.60.0/24
E1123 08:20:40.261640    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-236739 --subnet=192.168.60.0/24: (33.138859042s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-236739 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-236739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-236739
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-236739: (2.255694931s)
--- PASS: TestKicCustomSubnet (35.42s)

                                                
                                    
x
+
TestKicStaticIP (40.05s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-649350 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-649350 --static-ip=192.168.200.200: (37.606101586s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-649350 ip
helpers_test.go:175: Cleaning up "static-ip-649350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-649350
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-649350: (2.286018817s)
--- PASS: TestKicStaticIP (40.05s)
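The subnet and static-IP variants differ only in the flag handed to start; the invocations from these two runs and the checks that follow them:
	out/minikube-linux-arm64 start -p custom-subnet-236739 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-236739 --format "{{(index .IPAM.Config 0).Subnet}}"
	out/minikube-linux-arm64 start -p static-ip-649350 --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-ip-649350 ip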

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (70.16s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-035335 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-035335 --driver=docker  --container-runtime=containerd: (32.115892182s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-038108 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-038108 --driver=docker  --container-runtime=containerd: (32.076146042s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-035335
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-038108
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-038108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-038108
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-038108: (2.217552848s)
helpers_test.go:175: Cleaning up "first-035335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-035335
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-035335: (2.340937841s)
--- PASS: TestMinikubeProfile (70.16s)
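
The profile juggling above boils down to the following commands; the profile names are illustrative:

  out/minikube-linux-arm64 start -p first-demo  --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 start -p second-demo --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 profile first-demo         # switch the active profile
  out/minikube-linux-arm64 profile list -ojson        # both profiles reported as JSON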

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.31s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-979897 --memory=3072 --mount-string /tmp/TestMountStartserial4060485712/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-979897 --memory=3072 --mount-string /tmp/TestMountStartserial4060485712/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.305606739s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.31s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-979897 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
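
The two mount-start tests above map to the following sketch, with the same (abridged) flags; the profile name and host path are illustrative:

  out/minikube-linux-arm64 start -p mount-demo --memory=3072 \
      --mount-string /tmp/hostdir:/minikube-host --mount-uid 0 --mount-gid 0 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host   # host directory contents visible inside the node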

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.14s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-981604 --memory=3072 --mount-string /tmp/TestMountStartserial4060485712/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-981604 --memory=3072 --mount-string /tmp/TestMountStartserial4060485712/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.141999109s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.14s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-981604 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-979897 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-979897 --alsologtostderr -v=5: (1.705092131s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-981604 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-981604
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-981604: (1.28897215s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.79s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-981604
E1123 08:23:23.522682    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-981604: (6.793733195s)
--- PASS: TestMountStart/serial/RestartStopped (7.79s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-981604 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (140.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-012322 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1123 08:25:40.260339    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-012322 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m19.725020917s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.26s)
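
Condensed by-hand version of the two-node bring-up above; the profile name "mn-demo" is illustrative:

  out/minikube-linux-arm64 start -p mn-demo --nodes=2 --memory=3072 --wait=true \
      --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p mn-demo status           # one control plane plus one worker
  out/minikube-linux-arm64 node add -p mn-demo         # grows the cluster, as the AddNode test below does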

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-012322 -- rollout status deployment/busybox: (3.065793824s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- exec busybox-7b57f96db7-mx8rn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- exec busybox-7b57f96db7-qxkvp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- exec busybox-7b57f96db7-mx8rn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- exec busybox-7b57f96db7-qxkvp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- exec busybox-7b57f96db7-mx8rn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- exec busybox-7b57f96db7-qxkvp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.92s)
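
The deployment steps above reduce to the following sketch (pod names vary per run, so the exec target is a placeholder):

  out/minikube-linux-arm64 kubectl -p mn-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
  out/minikube-linux-arm64 kubectl -p mn-demo -- rollout status deployment/busybox
  out/minikube-linux-arm64 kubectl -p mn-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
  out/minikube-linux-arm64 kubectl -p mn-demo -- exec <busybox-pod> -- nslookup kubernetes.default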

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- exec busybox-7b57f96db7-mx8rn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- exec busybox-7b57f96db7-mx8rn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- exec busybox-7b57f96db7-qxkvp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-012322 -- exec busybox-7b57f96db7-qxkvp -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                    
TestMultiNode/serial/AddNode (58.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-012322 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-012322 -v=5 --alsologtostderr: (57.444487s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.16s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-012322 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp testdata/cp-test.txt multinode-012322:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp multinode-012322:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile859032105/001/cp-test_multinode-012322.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp multinode-012322:/home/docker/cp-test.txt multinode-012322-m02:/home/docker/cp-test_multinode-012322_multinode-012322-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m02 "sudo cat /home/docker/cp-test_multinode-012322_multinode-012322-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp multinode-012322:/home/docker/cp-test.txt multinode-012322-m03:/home/docker/cp-test_multinode-012322_multinode-012322-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m03 "sudo cat /home/docker/cp-test_multinode-012322_multinode-012322-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp testdata/cp-test.txt multinode-012322-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp multinode-012322-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile859032105/001/cp-test_multinode-012322-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp multinode-012322-m02:/home/docker/cp-test.txt multinode-012322:/home/docker/cp-test_multinode-012322-m02_multinode-012322.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322 "sudo cat /home/docker/cp-test_multinode-012322-m02_multinode-012322.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp multinode-012322-m02:/home/docker/cp-test.txt multinode-012322-m03:/home/docker/cp-test_multinode-012322-m02_multinode-012322-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m03 "sudo cat /home/docker/cp-test_multinode-012322-m02_multinode-012322-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp testdata/cp-test.txt multinode-012322-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp multinode-012322-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile859032105/001/cp-test_multinode-012322-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp multinode-012322-m03:/home/docker/cp-test.txt multinode-012322:/home/docker/cp-test_multinode-012322-m03_multinode-012322.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322 "sudo cat /home/docker/cp-test_multinode-012322-m03_multinode-012322.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 cp multinode-012322-m03:/home/docker/cp-test.txt multinode-012322-m02:/home/docker/cp-test_multinode-012322-m03_multinode-012322-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 ssh -n multinode-012322-m02 "sudo cat /home/docker/cp-test_multinode-012322-m03_multinode-012322-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.66s)
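
The copy matrix above exercises three directions of "minikube cp"; a trimmed sketch (profile and file names illustrative):

  out/minikube-linux-arm64 -p mn-demo cp testdata/cp-test.txt mn-demo:/home/docker/cp-test.txt                  # host -> node
  out/minikube-linux-arm64 -p mn-demo cp mn-demo:/home/docker/cp-test.txt /tmp/cp-test-copy.txt                 # node -> host
  out/minikube-linux-arm64 -p mn-demo cp mn-demo:/home/docker/cp-test.txt mn-demo-m02:/home/docker/cp-test.txt  # node -> node
  out/minikube-linux-arm64 -p mn-demo ssh -n mn-demo-m02 "sudo cat /home/docker/cp-test.txt"                    # verify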

                                                
                                    
TestMultiNode/serial/StopNode (2.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 node stop m03
E1123 08:27:03.327371    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-012322 node stop m03: (1.306246578s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-012322 status: exit status 7 (569.276091ms)

                                                
                                                
-- stdout --
	multinode-012322
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-012322-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-012322-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-012322 status --alsologtostderr: exit status 7 (550.589924ms)

                                                
                                                
-- stdout --
	multinode-012322
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-012322-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-012322-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:27:05.162554  126269 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:27:05.163015  126269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:27:05.163039  126269 out.go:374] Setting ErrFile to fd 2...
	I1123 08:27:05.163057  126269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:27:05.163351  126269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:27:05.163564  126269 out.go:368] Setting JSON to false
	I1123 08:27:05.163621  126269 mustload.go:66] Loading cluster: multinode-012322
	I1123 08:27:05.163698  126269 notify.go:221] Checking for updates...
	I1123 08:27:05.164604  126269 config.go:182] Loaded profile config "multinode-012322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:27:05.164653  126269 status.go:174] checking status of multinode-012322 ...
	I1123 08:27:05.165173  126269 cli_runner.go:164] Run: docker container inspect multinode-012322 --format={{.State.Status}}
	I1123 08:27:05.186069  126269 status.go:371] multinode-012322 host status = "Running" (err=<nil>)
	I1123 08:27:05.186089  126269 host.go:66] Checking if "multinode-012322" exists ...
	I1123 08:27:05.186380  126269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-012322
	I1123 08:27:05.213913  126269 host.go:66] Checking if "multinode-012322" exists ...
	I1123 08:27:05.214258  126269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:27:05.214296  126269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-012322
	I1123 08:27:05.235805  126269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/multinode-012322/id_rsa Username:docker}
	I1123 08:27:05.339655  126269 ssh_runner.go:195] Run: systemctl --version
	I1123 08:27:05.346341  126269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:27:05.359315  126269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:27:05.418870  126269 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:27:05.409051988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:27:05.419407  126269 kubeconfig.go:125] found "multinode-012322" server: "https://192.168.67.2:8443"
	I1123 08:27:05.419453  126269 api_server.go:166] Checking apiserver status ...
	I1123 08:27:05.419498  126269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:27:05.432541  126269 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	I1123 08:27:05.441160  126269 api_server.go:182] apiserver freezer: "10:freezer:/docker/a9b1939c15aaa7255f43584313f4bf2baf3c35b30917fa3e801be0e9e6398665/kubepods/burstable/podf4bace7bc206373681f21bfa2f321cad/eaccbf623de1e548721e368e006e59ef1250d4d255d830528aa010aeb65e6106"
	I1123 08:27:05.441241  126269 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a9b1939c15aaa7255f43584313f4bf2baf3c35b30917fa3e801be0e9e6398665/kubepods/burstable/podf4bace7bc206373681f21bfa2f321cad/eaccbf623de1e548721e368e006e59ef1250d4d255d830528aa010aeb65e6106/freezer.state
	I1123 08:27:05.448730  126269 api_server.go:204] freezer state: "THAWED"
	I1123 08:27:05.448761  126269 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 08:27:05.457220  126269 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 08:27:05.457248  126269 status.go:463] multinode-012322 apiserver status = Running (err=<nil>)
	I1123 08:27:05.457267  126269 status.go:176] multinode-012322 status: &{Name:multinode-012322 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:27:05.457284  126269 status.go:174] checking status of multinode-012322-m02 ...
	I1123 08:27:05.457778  126269 cli_runner.go:164] Run: docker container inspect multinode-012322-m02 --format={{.State.Status}}
	I1123 08:27:05.475331  126269 status.go:371] multinode-012322-m02 host status = "Running" (err=<nil>)
	I1123 08:27:05.475358  126269 host.go:66] Checking if "multinode-012322-m02" exists ...
	I1123 08:27:05.475649  126269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-012322-m02
	I1123 08:27:05.492994  126269 host.go:66] Checking if "multinode-012322-m02" exists ...
	I1123 08:27:05.493298  126269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:27:05.493344  126269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-012322-m02
	I1123 08:27:05.516890  126269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21966-2339/.minikube/machines/multinode-012322-m02/id_rsa Username:docker}
	I1123 08:27:05.623130  126269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:27:05.636739  126269 status.go:176] multinode-012322-m02 status: &{Name:multinode-012322-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:27:05.636774  126269 status.go:174] checking status of multinode-012322-m03 ...
	I1123 08:27:05.637088  126269 cli_runner.go:164] Run: docker container inspect multinode-012322-m03 --format={{.State.Status}}
	I1123 08:27:05.654969  126269 status.go:371] multinode-012322-m03 host status = "Stopped" (err=<nil>)
	I1123 08:27:05.654992  126269 status.go:384] host is not running, skipping remaining checks
	I1123 08:27:05.654999  126269 status.go:176] multinode-012322-m03 status: &{Name:multinode-012322-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-012322 node start m03 -v=5 --alsologtostderr: (7.146422929s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.96s)
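
Stopping and restarting a single node, as the two tests above do; status exits non-zero (7 in this run) while a node is down:

  out/minikube-linux-arm64 -p mn-demo node stop m03
  out/minikube-linux-arm64 -p mn-demo status           # exit status 7 while m03 is stopped
  out/minikube-linux-arm64 -p mn-demo node start m03
  kubectl get nodes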

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-012322
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-012322
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-012322: (25.234912256s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-012322 --wait=true -v=5 --alsologtostderr
E1123 08:28:23.523304    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-012322 --wait=true -v=5 --alsologtostderr: (53.318714845s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-012322
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.67s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-012322 node delete m03: (4.940386414s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-012322 stop: (23.918106446s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-012322 status: exit status 7 (92.878291ms)

                                                
                                                
-- stdout --
	multinode-012322
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-012322-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-012322 status --alsologtostderr: exit status 7 (91.040336ms)

                                                
                                                
-- stdout --
	multinode-012322
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-012322-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:29:02.030071  135021 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:29:02.030184  135021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:29:02.030194  135021 out.go:374] Setting ErrFile to fd 2...
	I1123 08:29:02.030200  135021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:29:02.030470  135021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:29:02.030656  135021 out.go:368] Setting JSON to false
	I1123 08:29:02.030691  135021 mustload.go:66] Loading cluster: multinode-012322
	I1123 08:29:02.030800  135021 notify.go:221] Checking for updates...
	I1123 08:29:02.031126  135021 config.go:182] Loaded profile config "multinode-012322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:29:02.031138  135021 status.go:174] checking status of multinode-012322 ...
	I1123 08:29:02.031982  135021 cli_runner.go:164] Run: docker container inspect multinode-012322 --format={{.State.Status}}
	I1123 08:29:02.048985  135021 status.go:371] multinode-012322 host status = "Stopped" (err=<nil>)
	I1123 08:29:02.049009  135021 status.go:384] host is not running, skipping remaining checks
	I1123 08:29:02.049018  135021 status.go:176] multinode-012322 status: &{Name:multinode-012322 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:29:02.049054  135021 status.go:174] checking status of multinode-012322-m02 ...
	I1123 08:29:02.049374  135021 cli_runner.go:164] Run: docker container inspect multinode-012322-m02 --format={{.State.Status}}
	I1123 08:29:02.067202  135021 status.go:371] multinode-012322-m02 host status = "Stopped" (err=<nil>)
	I1123 08:29:02.067227  135021 status.go:384] host is not running, skipping remaining checks
	I1123 08:29:02.067235  135021 status.go:176] multinode-012322-m02 status: &{Name:multinode-012322-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-012322 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-012322 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (46.400884647s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-012322 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.12s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-012322
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-012322-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-012322-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.27625ms)

                                                
                                                
-- stdout --
	* [multinode-012322-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-012322-m02' is duplicated with machine name 'multinode-012322-m02' in profile 'multinode-012322'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-012322-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-012322-m03 --driver=docker  --container-runtime=containerd: (31.776105779s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-012322
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-012322: exit status 80 (332.954267ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-012322 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-012322-m03 already exists in multinode-012322-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-012322-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-012322-m03: (2.060697354s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.31s)

                                                
                                    
TestPreload (120.14s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-672178 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E1123 08:30:40.260179    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-672178 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (56.997808381s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-672178 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-672178 image pull gcr.io/k8s-minikube/busybox: (2.587917317s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-672178
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-672178: (5.8598713s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-672178 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-672178 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (51.990290366s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-672178 image list
helpers_test.go:175: Cleaning up "test-preload-672178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-672178
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-672178: (2.454334954s)
--- PASS: TestPreload (120.14s)
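
The preload scenario above in by-hand form (profile name illustrative): start without a preloaded tarball, pull an extra image, then check that it survives a stop/start cycle:

  out/minikube-linux-arm64 start -p preload-demo --memory=3072 --preload=false \
      --kubernetes-version=v1.32.0 --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
  out/minikube-linux-arm64 stop -p preload-demo
  out/minikube-linux-arm64 start -p preload-demo --memory=3072 --wait=true --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p preload-demo image list   # the pulled busybox image is expected to still be listed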

                                                
                                    
TestScheduledStopUnix (111.44s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-650772 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-650772 --memory=3072 --driver=docker  --container-runtime=containerd: (34.285135828s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-650772 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:33:02.158301  150839 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:33:02.158562  150839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:33:02.158657  150839 out.go:374] Setting ErrFile to fd 2...
	I1123 08:33:02.158709  150839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:33:02.159070  150839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:33:02.159349  150839 out.go:368] Setting JSON to false
	I1123 08:33:02.159554  150839 mustload.go:66] Loading cluster: scheduled-stop-650772
	I1123 08:33:02.160042  150839 config.go:182] Loaded profile config "scheduled-stop-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:33:02.160208  150839 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/config.json ...
	I1123 08:33:02.160455  150839 mustload.go:66] Loading cluster: scheduled-stop-650772
	I1123 08:33:02.160634  150839 config.go:182] Loaded profile config "scheduled-stop-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-650772 -n scheduled-stop-650772
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-650772 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:33:02.656878  150929 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:33:02.657159  150929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:33:02.657171  150929 out.go:374] Setting ErrFile to fd 2...
	I1123 08:33:02.657182  150929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:33:02.657559  150929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:33:02.657865  150929 out.go:368] Setting JSON to false
	I1123 08:33:02.658203  150929 daemonize_unix.go:73] killing process 150855 as it is an old scheduled stop
	I1123 08:33:02.661319  150929 mustload.go:66] Loading cluster: scheduled-stop-650772
	I1123 08:33:02.661763  150929 config.go:182] Loaded profile config "scheduled-stop-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:33:02.661849  150929 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/config.json ...
	I1123 08:33:02.662061  150929 mustload.go:66] Loading cluster: scheduled-stop-650772
	I1123 08:33:02.662185  150929 config.go:182] Loaded profile config "scheduled-stop-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 08:33:02.666460    4151 retry.go:31] will retry after 86.711µs: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.667152    4151 retry.go:31] will retry after 151.627µs: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.668272    4151 retry.go:31] will retry after 178.626µs: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.669349    4151 retry.go:31] will retry after 474.921µs: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.670482    4151 retry.go:31] will retry after 389.69µs: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.671607    4151 retry.go:31] will retry after 940.642µs: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.673539    4151 retry.go:31] will retry after 961.166µs: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.674673    4151 retry.go:31] will retry after 1.467878ms: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.676811    4151 retry.go:31] will retry after 1.300354ms: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.680024    4151 retry.go:31] will retry after 2.78087ms: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.683241    4151 retry.go:31] will retry after 4.788403ms: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.688486    4151 retry.go:31] will retry after 12.939505ms: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.701707    4151 retry.go:31] will retry after 18.025678ms: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.719882    4151 retry.go:31] will retry after 19.737577ms: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.740127    4151 retry.go:31] will retry after 17.336367ms: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
I1123 08:33:02.758351    4151 retry.go:31] will retry after 64.411207ms: open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-650772 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1123 08:33:23.523010    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-650772 -n scheduled-stop-650772
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-650772
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-650772 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:33:28.624460  151606 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:33:28.624628  151606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:33:28.624638  151606 out.go:374] Setting ErrFile to fd 2...
	I1123 08:33:28.624644  151606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:33:28.624995  151606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:33:28.625259  151606 out.go:368] Setting JSON to false
	I1123 08:33:28.625378  151606 mustload.go:66] Loading cluster: scheduled-stop-650772
	I1123 08:33:28.625775  151606 config.go:182] Loaded profile config "scheduled-stop-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:33:28.625858  151606 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/scheduled-stop-650772/config.json ...
	I1123 08:33:28.626132  151606 mustload.go:66] Loading cluster: scheduled-stop-650772
	I1123 08:33:28.626289  151606 config.go:182] Loaded profile config "scheduled-stop-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-650772
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-650772: exit status 7 (67.403636ms)

                                                
                                                
-- stdout --
	scheduled-stop-650772
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-650772 -n scheduled-stop-650772
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-650772 -n scheduled-stop-650772: exit status 7 (69.820164ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-650772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-650772
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-650772: (5.495971959s)
--- PASS: TestScheduledStopUnix (111.44s)
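
The scheduled-stop flow above, condensed (profile name illustrative):

  out/minikube-linux-arm64 stop -p sched-demo --schedule 5m                 # schedule a stop five minutes out
  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p sched-demo    # query the TimeToStop field
  out/minikube-linux-arm64 stop -p sched-demo --cancel-scheduled            # cancel any pending scheduled stop
  out/minikube-linux-arm64 stop -p sched-demo --schedule 15s                # re-schedule; the node stops shortly after
  out/minikube-linux-arm64 status -p sched-demo                             # exit status 7 once stopped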

                                                
                                    
TestInsufficientStorage (13.45s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-659121 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-659121 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.856134952s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1c741c62-357e-4993-906b-4e24259ba5bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-659121] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c29cfd49-48c8-4c74-a388-3ad4cfdabf31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21966"}}
	{"specversion":"1.0","id":"bcd19655-7ec7-46bf-ad2d-e791b8036109","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7754cfc2-1390-42a2-a1b2-72ed01efa26d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig"}}
	{"specversion":"1.0","id":"c5d5d6f4-da08-40c5-824b-3ee8fce9282d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube"}}
	{"specversion":"1.0","id":"022c9d91-ae6b-4838-bc15-5a41eae333b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3f429bb7-05e8-4721-a2bc-3a1572408bca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d6464b77-1aad-4ad7-84c5-39bb66c61e49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1f01da62-a461-4261-983d-b6d10be7fd2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a94ecd15-ef8a-481c-b893-a4768c1fc067","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f69326a-c039-4753-813e-94b57bc88bdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2adba50b-f025-4704-a19c-2ac84b067d22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-659121\" primary control-plane node in \"insufficient-storage-659121\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6af1c83-7c7f-43c4-9272-317fe9e1435d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"50280935-5f06-4496-a28b-9e0073e8203f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1604cd2e-926b-4043-a0bb-6a2ba344d89d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-659121 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-659121 --output=json --layout=cluster: exit status 7 (309.422814ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-659121","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-659121","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 08:34:30.416436  153445 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-659121" does not appear in /home/jenkins/minikube-integration/21966-2339/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-659121 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-659121 --output=json --layout=cluster: exit status 7 (314.138073ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-659121","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-659121","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 08:34:30.728105  153511 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-659121" does not appear in /home/jenkins/minikube-integration/21966-2339/kubeconfig
	E1123 08:34:30.737805  153511 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/insufficient-storage-659121/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-659121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-659121
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-659121: (1.966868093s)
--- PASS: TestInsufficientStorage (13.45s)
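Note (illustrative sketch, not test output): the RSRC_DOCKER_STORAGE failure above appears to be driven by the MINIKUBE_TEST_STORAGE_CAPACITY=100 / MINIKUBE_TEST_AVAILABLE_STORAGE=19 overrides visible in the JSON events. A rough way to replay the condition, and the remedies the error event itself suggests, using the profile name from this run:
	# Replay the simulated low-storage start; it should exit with status 26 as above.
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-arm64 start -p insufficient-storage-659121 --memory=3072 --driver=docker --container-runtime=containerd
	# Remedies quoted from the error advice: free unused Docker data, or skip the check.
	docker system prune -a
	out/minikube-linux-arm64 start -p insufficient-storage-659121 --force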

                                                
                                    
TestRunningBinaryUpgrade (69.56s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2313486493 start -p running-upgrade-623697 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2313486493 start -p running-upgrade-623697 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (39.381924692s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-623697 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-623697 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.300424441s)
helpers_test.go:175: Cleaning up "running-upgrade-623697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-623697
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-623697: (1.982006064s)
--- PASS: TestRunningBinaryUpgrade (69.56s)
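Note (illustrative sketch, not test output): the running-binary upgrade exercised above boils down to creating a cluster with the previous release binary and then re-running start on the same, still-running profile with the binary under test. Condensed from the commands in this run:
	/tmp/minikube-v1.32.0.2313486493 start -p running-upgrade-623697 --memory=3072 --vm-driver=docker --container-runtime=containerd
	# Upgrade in place: the new binary starts the same profile without deleting it.
	out/minikube-linux-arm64 start -p running-upgrade-623697 --memory=3072 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 delete -p running-upgrade-623697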

                                                
                                    
TestKubernetesUpgrade (103.84s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-802321 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1123 08:36:26.601932    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-802321 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.235889274s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-802321
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-802321: (1.321648481s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-802321 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-802321 status --format={{.Host}}: exit status 7 (67.638123ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-802321 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-802321 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.499201335s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-802321 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-802321 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-802321 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (120.593845ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-802321] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-802321
	    minikube start -p kubernetes-upgrade-802321 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8023212 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-802321 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-802321 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-802321 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (18.458502106s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-802321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-802321
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-802321: (3.026209645s)
--- PASS: TestKubernetesUpgrade (103.84s)
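Note (illustrative sketch, not test output): the sequence above is start at v1.28.0, stop, upgrade the same profile to v1.34.1, then confirm that a downgrade attempt is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED). Condensed from the commands and versions recorded in this run:
	out/minikube-linux-arm64 start -p kubernetes-upgrade-802321 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-802321
	out/minikube-linux-arm64 start -p kubernetes-upgrade-802321 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=containerd
	# Downgrading the existing cluster is rejected; the suggested recovery is delete-and-recreate.
	out/minikube-linux-arm64 start -p kubernetes-upgrade-802321 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd \
	  || echo "downgrade refused, as expected"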

                                                
                                    
TestMissingContainerUpgrade (153.4s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2472216949 start -p missing-upgrade-145056 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2472216949 start -p missing-upgrade-145056 --memory=3072 --driver=docker  --container-runtime=containerd: (1m0.120893867s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-145056
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-145056
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-145056 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1123 08:35:40.260255    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-145056 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m29.306501527s)
helpers_test.go:175: Cleaning up "missing-upgrade-145056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-145056
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-145056: (2.287110038s)
--- PASS: TestMissingContainerUpgrade (153.40s)
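Note (illustrative sketch, not test output): the "missing container" scenario above deletes the Docker container behind an existing profile and lets the newer binary recreate it on the next start. Condensed from the commands in this run:
	/tmp/minikube-v1.32.0.2472216949 start -p missing-upgrade-145056 --memory=3072 --driver=docker --container-runtime=containerd
	# Remove the container but keep the minikube profile on disk.
	docker stop missing-upgrade-145056 && docker rm missing-upgrade-145056
	# The binary under test rebuilds the missing container for the same profile.
	out/minikube-linux-arm64 start -p missing-upgrade-145056 --memory=3072 --driver=docker --container-runtime=containerd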

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-428910 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-428910 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (93.410889ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-428910] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
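Note (illustrative sketch, not test output): the exit status 14 above is the documented MK_USAGE conflict between --no-kubernetes and --kubernetes-version. The failing call and the fix suggested by the error message itself, using the profile from this run:
	# Rejected: --kubernetes-version cannot be combined with --no-kubernetes.
	out/minikube-linux-arm64 start -p NoKubernetes-428910 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	# Clear any globally configured version, then start without Kubernetes.
	out/minikube-linux-arm64 config unset kubernetes-version
	out/minikube-linux-arm64 start -p NoKubernetes-428910 --no-kubernetes --driver=docker --container-runtime=containerd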

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (50.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-428910 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-428910 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (49.884524327s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-428910 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (50.54s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (24.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-428910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-428910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.321793343s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-428910 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-428910 status -o json: exit status 2 (310.059174ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-428910","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-428910
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-428910: (2.075750372s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.71s)

                                                
                                    
TestNoKubernetes/serial/Start (7.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-428910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-428910 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.781557628s)
--- PASS: TestNoKubernetes/serial/Start (7.78s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21966-2339/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-428910 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-428910 "sudo systemctl is-active --quiet service kubelet": exit status 1 (361.175463ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.91s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-428910
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-428910: (3.277484938s)
--- PASS: TestNoKubernetes/serial/Stop (3.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-428910 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-428910 --driver=docker  --container-runtime=containerd: (7.305362158s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.31s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-428910 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-428910 "sudo systemctl is-active --quiet service kubelet": exit status 1 (353.941168ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (64.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.733028585 start -p stopped-upgrade-238717 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.733028585 start -p stopped-upgrade-238717 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (42.347327725s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.733028585 -p stopped-upgrade-238717 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.733028585 -p stopped-upgrade-238717 stop: (1.349253209s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-238717 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-238717 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.02408742s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (64.72s)
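Note (illustrative sketch, not test output): unlike the running-binary variant, this test stops the cluster with the old binary before the binary under test takes over. Condensed from the commands in this run:
	/tmp/minikube-v1.32.0.733028585 start -p stopped-upgrade-238717 --memory=3072 --vm-driver=docker --container-runtime=containerd
	/tmp/minikube-v1.32.0.733028585 -p stopped-upgrade-238717 stop
	# The new binary upgrades and restarts the stopped profile.
	out/minikube-linux-arm64 start -p stopped-upgrade-238717 --memory=3072 --driver=docker --container-runtime=containerd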

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-238717
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-238717: (1.840313517s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.84s)

                                                
                                    
TestPause/serial/Start (83.81s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-889231 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1123 08:38:23.522613    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-889231 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m23.813762024s)
--- PASS: TestPause/serial/Start (83.81s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.26s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-889231 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-889231 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.234035043s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.26s)

                                                
                                    
TestNetworkPlugins/group/false (5.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-440243 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-440243 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (240.919665ms)

                                                
                                                
-- stdout --
	* [false-440243] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:39:46.385244  186983 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:39:46.385362  186983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:39:46.385374  186983 out.go:374] Setting ErrFile to fd 2...
	I1123 08:39:46.385380  186983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:39:46.385719  186983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-2339/.minikube/bin
	I1123 08:39:46.386287  186983 out.go:368] Setting JSON to false
	I1123 08:39:46.387462  186983 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4935,"bootTime":1763882251,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:39:46.387561  186983 start.go:143] virtualization:  
	I1123 08:39:46.393182  186983 out.go:179] * [false-440243] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:39:46.396415  186983 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:39:46.396502  186983 notify.go:221] Checking for updates...
	I1123 08:39:46.402634  186983 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:39:46.405580  186983 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-2339/kubeconfig
	I1123 08:39:46.408584  186983 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-2339/.minikube
	I1123 08:39:46.411500  186983 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:39:46.414388  186983 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:39:46.418952  186983 config.go:182] Loaded profile config "pause-889231": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:39:46.419090  186983 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:39:46.455045  186983 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:39:46.455195  186983 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:39:46.557101  186983 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 08:39:46.547355223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:39:46.557192  186983 docker.go:319] overlay module found
	I1123 08:39:46.560287  186983 out.go:179] * Using the docker driver based on user configuration
	I1123 08:39:46.563146  186983 start.go:309] selected driver: docker
	I1123 08:39:46.563164  186983 start.go:927] validating driver "docker" against <nil>
	I1123 08:39:46.563177  186983 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:39:46.566690  186983 out.go:203] 
	W1123 08:39:46.569609  186983 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1123 08:39:46.572423  186983 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-440243 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-440243" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-440243" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:39:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-889231
contexts:
- context:
    cluster: pause-889231
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:39:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-889231
  name: pause-889231
current-context: pause-889231
kind: Config
preferences: {}
users:
- name: pause-889231
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/pause-889231/client.crt
    client-key: /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/pause-889231/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-440243

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440243"

                                                
                                                
----------------------- debugLogs end: false-440243 [took: 4.685462531s] --------------------------------
helpers_test.go:175: Cleaning up "false-440243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-440243
--- PASS: TestNetworkPlugins/group/false (5.15s)
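Note (illustrative sketch, not test output): the exit status 14 above confirms that --cni=false is rejected for the containerd runtime ("The containerd container runtime requires CNI"); the debugLogs dump that follows simply shows the profile was never created. The failing call, plus a hypothetical variant that avoids the error by letting minikube pick a CNI:
	# Rejected with MK_USAGE: containerd needs a CNI plugin.
	out/minikube-linux-arm64 start -p false-440243 --memory=3072 --cni=false --driver=docker --container-runtime=containerd
	# Dropping --cni=false (illustrative only, not part of this test) lets the start proceed.
	out/minikube-linux-arm64 start -p false-440243 --memory=3072 --driver=docker --container-runtime=containerd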

                                                
                                    
TestPause/serial/Pause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-889231 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.83s)

                                                
                                    
TestPause/serial/VerifyStatus (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-889231 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-889231 --output=json --layout=cluster: exit status 2 (399.296389ms)

                                                
                                                
-- stdout --
	{"Name":"pause-889231","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-889231","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
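
The cluster-layout JSON above encodes the paused state directly: StatusName "Paused" (code 418) at the cluster and apiserver level, and "Stopped" for the kubelet. Purely as an illustrative sketch (assuming the jq tool is available on the host; it is not part of the tooling this report shows), those fields could be filtered out of the same command's output:

	out/minikube-linux-arm64 status -p pause-889231 --output=json --layout=cluster | jq -r '.StatusName'                              # cluster-level state, "Paused" in the run above
	out/minikube-linux-arm64 status -p pause-889231 --output=json --layout=cluster | jq -r '.Nodes[0].Components.kubelet.StatusName'  # per-node kubelet state, "Stopped" in the run above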

                                                
                                    
TestPause/serial/Unpause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-889231 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.88s)

                                                
                                    
TestPause/serial/PauseAgain (1.07s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-889231 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-889231 --alsologtostderr -v=5: (1.072944085s)
--- PASS: TestPause/serial/PauseAgain (1.07s)

                                                
                                    
TestPause/serial/DeletePaused (3.05s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-889231 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-889231 --alsologtostderr -v=5: (3.052408706s)
--- PASS: TestPause/serial/DeletePaused (3.05s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.17s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-889231
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-889231: exit status 1 (21.657558ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-889231: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)
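
The non-zero exit from docker volume inspect above is consistent with the profile's volume having been removed by the earlier delete. A one-line sketch of the same manual check (illustrative only, not part of the test harness shown here):

	docker volume inspect pause-889231 >/dev/null 2>&1 || echo "pause-889231 volume removed"   # inspect fails with "no such volume" once the profile is gone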

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (63.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m3.428150514s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-180638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-180638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.074039069s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-180638 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-180638 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-180638 --alsologtostderr -v=3: (12.154315144s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180638 -n old-k8s-version-180638
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180638 -n old-k8s-version-180638: exit status 7 (85.895214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-180638 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (56.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1123 08:43:23.522465    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-180638 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (55.579089541s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-180638 -n old-k8s-version-180638
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (56.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2thzl" [e2aad41b-3911-4b9b-92db-b66442dc63f8] Running
E1123 08:43:43.329957    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003010781s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2thzl" [e2aad41b-3911-4b9b-92db-b66442dc63f8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00596323s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-180638 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (75.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-596617 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-596617 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m15.354653145s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-180638 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-180638 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-180638 -n old-k8s-version-180638
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-180638 -n old-k8s-version-180638: exit status 2 (434.47569ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-180638 -n old-k8s-version-180638
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-180638 -n old-k8s-version-180638: exit status 2 (325.468028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-180638 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-180638 -n old-k8s-version-180638
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-180638 -n old-k8s-version-180638
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (92.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-230843 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-230843 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m32.19773255s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-596617 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-596617 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-596617 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-596617 --alsologtostderr -v=3: (12.130638552s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-596617 -n no-preload-596617
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-596617 -n no-preload-596617: exit status 7 (75.339554ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-596617 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (54.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-596617 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-596617 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (54.31507274s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-596617 -n no-preload-596617
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-230843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-230843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.445843008s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-230843 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-230843 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-230843 --alsologtostderr -v=3: (12.713935233s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.71s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-230843 -n embed-certs-230843
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-230843 -n embed-certs-230843: exit status 7 (102.371303ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-230843 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (56.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-230843 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-230843 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.960661707s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-230843 -n embed-certs-230843
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (56.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hcptl" [8255efac-3b84-44da-a560-9f811fa44b53] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002766909s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hcptl" [8255efac-3b84-44da-a560-9f811fa44b53] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003682086s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-596617 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-596617 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-596617 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-596617 -n no-preload-596617
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-596617 -n no-preload-596617: exit status 2 (350.523087ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-596617 -n no-preload-596617
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-596617 -n no-preload-596617: exit status 2 (366.693442ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-596617 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-596617 -n no-preload-596617
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-596617 -n no-preload-596617
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-422900 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-422900 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m25.440963005s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-52927" [443a8ab4-391f-47e2-abad-33585dafe738] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.011609299s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-52927" [443a8ab4-391f-47e2-abad-33585dafe738] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002972201s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-230843 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-230843 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-230843 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-230843 --alsologtostderr -v=1: (1.236082166s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-230843 -n embed-certs-230843
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-230843 -n embed-certs-230843: exit status 2 (513.635867ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-230843 -n embed-certs-230843
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-230843 -n embed-certs-230843: exit status 2 (470.60818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-230843 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-230843 --alsologtostderr -v=1: (1.033909496s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-230843 -n embed-certs-230843
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-230843 -n embed-certs-230843
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (39.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-009152 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1123 08:47:19.709134    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:47:22.270438    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:47:27.392482    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:47:37.634439    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-009152 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (39.191087576s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-009152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1123 08:47:58.116543    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-009152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.105229938s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-009152 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-009152 --alsologtostderr -v=3: (1.480500187s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-009152 -n newest-cni-009152
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-009152 -n newest-cni-009152: exit status 7 (105.211674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-009152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-009152 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-009152 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (16.751858887s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-009152 -n newest-cni-009152
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-009152 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-009152 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-009152 -n newest-cni-009152
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-009152 -n newest-cni-009152: exit status 2 (337.685654ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-009152 -n newest-cni-009152
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-009152 -n newest-cni-009152: exit status 2 (333.263062ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-009152 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-009152 --alsologtostderr -v=1: (1.046180242s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-009152 -n newest-cni-009152
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-009152 -n newest-cni-009152
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.61s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m26.937710859s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-422900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-422900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3876525s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-422900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-422900 --alsologtostderr -v=3
E1123 08:48:39.077865    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-422900 --alsologtostderr -v=3: (12.459674356s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900: exit status 7 (115.299921ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-422900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (68.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-422900 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-422900 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m7.649467725s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (68.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fls7k" [b9e1adfc-7f8b-4ba6-b5fa-8814e5375892] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003416805s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-440243 "pgrep -a kubelet"
I1123 08:49:51.778252    4151 config.go:182] Loaded profile config "auto-440243": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-440243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qqkzp" [46d0925f-22d0-4668-9cb9-c466be9a4fbd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qqkzp" [46d0925f-22d0-4668-9cb9-c466be9a4fbd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.043512976s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fls7k" [b9e1adfc-7f8b-4ba6-b5fa-8814e5375892] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003746497s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-422900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-422900 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-422900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900: exit status 2 (734.575149ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900: exit status 2 (492.791258ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-422900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-422900 -n default-k8s-diff-port-422900
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.72s)
E1123 08:55:32.596611    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:55:33.016850    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-440243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.43s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
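The three checks above (DNS, Localhost, HairPin) all go through the netcat deployment created earlier in the group; run by hand against the auto-440243 context they are roughly the following kubectl invocations (the same commands the log shows):

    kubectl --context auto-440243 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The last command is the hairpin case: the pod dials its own service name, so it only succeeds when traffic routed back to the originating pod is allowed.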

TestNetworkPlugins/group/kindnet/Start (85.91s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1123 08:50:06.182684    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:50:07.464613    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:50:10.026306    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:50:15.148022    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m25.910827516s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.91s)
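Each Start test in this group boots a dedicated profile with the CNI under test; only the CNI selection changes between groups (--cni=kindnet, calico, flannel, bridge, a custom manifest such as testdata/kube-flannel.yaml, or --enable-default-cni=true). A hand-run equivalent of the command above, assuming the locally built binary:

    out/minikube-linux-arm64 start -p kindnet-440243 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=kindnet --driver=docker --container-runtime=containerd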

TestNetworkPlugins/group/calico/Start (58.43s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1123 08:50:40.260367    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/functional-638783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:50:45.871207    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (58.433923122s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.43s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-5jlkc" [2e3195a0-b00e-418a-943e-219877e89f40] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1123 08:51:26.833307    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-5jlkc" [2e3195a0-b00e-418a-943e-219877e89f40] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004229483s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-440243 "pgrep -a kubelet"
I1123 08:51:30.303421    4151 config.go:182] Loaded profile config "calico-440243": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

TestNetworkPlugins/group/calico/NetCatPod (9.3s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-440243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rqdrr" [b3932aa0-a1cc-44ad-bdb8-8bd93bbc1c29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rqdrr" [b3932aa0-a1cc-44ad-bdb8-8bd93bbc1c29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003994777s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.30s)
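The NetCatPod step in each group applies the same test workload and waits for it to reach Running; outside the harness this is roughly the following, with kubectl wait standing in for the test helper's own polling:

    kubectl --context calico-440243 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context calico-440243 wait --for=condition=Ready pod -l app=netcat --timeout=15m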

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-2lhvf" [e80794a3-715e-469b-9a38-c3de2321eeaf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003411779s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-440243 "pgrep -a kubelet"
I1123 08:51:38.325162    4151 config.go:182] Loaded profile config "kindnet-440243": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-440243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gjs2x" [9b9bab07-b8c4-468b-8f31-bc8926f966ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gjs2x" [9b9bab07-b8c4-468b-8f31-bc8926f966ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004512026s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

TestNetworkPlugins/group/calico/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-440243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.28s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.28s)

TestNetworkPlugins/group/calico/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-440243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/Start (70.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m10.365822318s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.37s)

TestNetworkPlugins/group/enable-default-cni/Start (82.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1123 08:52:17.133910    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:52:44.840909    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/old-k8s-version-180638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:52:48.755264    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:06.603266    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:09.456174    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:09.463431    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:09.474848    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:09.496343    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:09.537682    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:09.619001    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:09.780454    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:10.101985    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:10.744139    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:12.025992    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:14.587323    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m22.330448434s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.33s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-440243 "pgrep -a kubelet"
I1123 08:53:17.058456    4151 config.go:182] Loaded profile config "custom-flannel-440243": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-440243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8q9c8" [4daa3245-7130-4f1f-b59c-f7fb8585d527] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1123 08:53:19.709373    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-8q9c8" [4daa3245-7130-4f1f-b59c-f7fb8585d527] Running
E1123 08:53:23.522949    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/addons-243441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003726844s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-440243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-440243 "pgrep -a kubelet"
I1123 08:53:37.541270    4151 config.go:182] Loaded profile config "enable-default-cni-440243": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-440243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tbrkf" [97f909a5-62a0-4912-a779-be62fbe80999] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tbrkf" [97f909a5-62a0-4912-a779-be62fbe80999] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005388157s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.35s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-440243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (64.37s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1123 08:53:50.432823    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m4.366885376s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.37s)

TestNetworkPlugins/group/bridge/Start (74.7s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1123 08:54:31.395223    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/default-k8s-diff-port-422900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:54:52.039722    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:54:52.046182    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:54:52.057680    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:54:52.079076    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:54:52.120519    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:54:52.202018    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:54:52.363875    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:54:52.685863    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-440243 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m14.704618444s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.70s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-l8l9h" [6dd032dd-7e64-4df0-abf9-e03ca73cd74c] Running
E1123 08:54:53.328034    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:54:54.609393    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:54:57.171655    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003577813s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
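The ControllerPod checks only confirm that the CNI's own agent pod becomes healthy. An approximate manual equivalent for the flannel group, again substituting kubectl wait for the helper's polling:

    kubectl --context flannel-440243 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-440243 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m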

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-440243 "pgrep -a kubelet"
I1123 08:54:59.325510    4151 config.go:182] Loaded profile config "flannel-440243": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-440243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d46ml" [43a3ea30-3c1a-4269-90b5-3864864295c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1123 08:55:02.293292    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/auto-440243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-d46ml" [43a3ea30-3c1a-4269-90b5-3864864295c0] Running
E1123 08:55:04.891705    4151 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/no-preload-596617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003241772s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

TestNetworkPlugins/group/flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-440243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-440243 "pgrep -a kubelet"
I1123 08:55:26.537616    4151 config.go:182] Loaded profile config "bridge-440243": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (10.37s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-440243 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5q4zc" [f69013fc-732e-4998-bae0-28863e810203] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5q4zc" [f69013fc-732e-4998-bae0-28863e810203] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003507443s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.37s)

TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-440243 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-440243 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (30/333)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.42s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-737626 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-737626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-737626
--- SKIP: TestDownloadOnlyKic (0.42s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-142181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-142181
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.58s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-440243 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-440243

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-440243

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-440243

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-440243

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-440243

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-440243

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-440243

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-440243

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-440243

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-440243

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

>>> host: /etc/hosts:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

>>> host: /etc/resolv.conf:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-440243

>>> host: crictl pods:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

>>> host: crictl containers:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

>>> k8s: describe netcat deployment:
error: context "kubenet-440243" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-440243" does not exist

>>> k8s: netcat logs:
error: context "kubenet-440243" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-440243" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-440243" does not exist

>>> k8s: coredns logs:
error: context "kubenet-440243" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-440243" does not exist

>>> k8s: api server logs:
error: context "kubenet-440243" does not exist

>>> host: /etc/cni:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

>>> host: ip a s:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

>>> host: ip r s:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

>>> host: iptables-save:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

>>> host: iptables table nat:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-440243" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-2339/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:39:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-889231
contexts:
- context:
    cluster: pause-889231
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:39:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-889231
  name: pause-889231
current-context: pause-889231
kind: Config
preferences: {}
users:
- name: pause-889231
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/pause-889231/client.crt
    client-key: /home/jenkins/minikube-integration/21966-2339/.minikube/profiles/pause-889231/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-440243

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440243"

                                                
                                                
----------------------- debugLogs end: kubenet-440243 [took: 4.425080132s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-440243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-440243
--- SKIP: TestNetworkPlugins/group/kubenet (4.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-440243 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-440243" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-440243

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-440243" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440243"

                                                
                                                
----------------------- debugLogs end: cilium-440243 [took: 4.908212519s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-440243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-440243
--- SKIP: TestNetworkPlugins/group/cilium (5.08s)

                                                
                                    