Test Report: Docker_Linux_containerd_arm64 21918

08454a179ffa60c8ae500105aac58654b5cdef58:2025-11-19:42399

Failed tests (4/333)

Order  Failed test                                                  Duration (s)
301    TestStartStop/group/old-k8s-version/serial/DeployApp         13.86
314    TestStartStop/group/default-k8s-diff-port/serial/DeployApp   13.14
315    TestStartStop/group/embed-certs/serial/DeployApp             12.90
341    TestStartStop/group/no-preload/serial/DeployApp              14.94
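
All four failures are the DeployApp step of a TestStartStop group; the old-k8s-version case is detailed below. To re-run the affected groups locally, a sketch assuming the standard minikube repo layout (these tests live under test/integration) and a prebuilt out/minikube binary; filter at the group level rather than at DeployApp itself, because the serial subtests depend on the earlier steps in each group:

    # hypothetical local re-run; adjust the timeout to your machine
    go test ./test/integration -run "TestStartStop/group/(old-k8s-version|default-k8s-diff-port|embed-certs|no-preload)" -timeout 120m
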
TestStartStop/group/old-k8s-version/serial/DeployApp (13.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-264160 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2af6deb4-937f-4b9b-9de6-995e75a080b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2af6deb4-937f-4b9b-9de6-995e75a080b8] Running
E1119 22:36:48.008664    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003496923s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-264160 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
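
The failing assertion above is the pod's open-file limit (ulimit -n, i.e. RLIMIT_NOFILE). A minimal reproduction sketch against this profile; the first command is the one the test runs, and the two docker commands are assumed extra checks against the kic node container, not steps the test performs:

    kubectl --context old-k8s-version-264160 exec busybox -- /bin/sh -c "ulimit -n"   # pod view: 1024 here, 1048576 expected
    docker exec old-k8s-version-264160 sh -c "ulimit -n"                              # limit inside the node container itself (assumed check)
    docker inspect old-k8s-version-264160 --format '{{json .HostConfig.Ulimits}}'     # [] per the inspect output below
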
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-264160
helpers_test.go:243: (dbg) docker inspect old-k8s-version-264160:

-- stdout --
	[
	    {
	        "Id": "49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a",
	        "Created": "2025-11-19T22:35:36.829393211Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 205037,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:35:36.889026709Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a/hostname",
	        "HostsPath": "/var/lib/docker/containers/49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a/hosts",
	        "LogPath": "/var/lib/docker/containers/49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a/49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a-json.log",
	        "Name": "/old-k8s-version-264160",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-264160:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-264160",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a",
	                "LowerDir": "/var/lib/docker/overlay2/feff7a4e723e18389dcb4a6f7e089bff4aeb566c5b553ed60b078e825f1fd0a8-init/diff:/var/lib/docker/overlay2/b6ebc9601ea0ae08484f263713f3358dd93f7748ebfafbd9155229908dee9606/diff",
	                "MergedDir": "/var/lib/docker/overlay2/feff7a4e723e18389dcb4a6f7e089bff4aeb566c5b553ed60b078e825f1fd0a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/feff7a4e723e18389dcb4a6f7e089bff4aeb566c5b553ed60b078e825f1fd0a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/feff7a4e723e18389dcb4a6f7e089bff4aeb566c5b553ed60b078e825f1fd0a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-264160",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-264160/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-264160",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-264160",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-264160",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c6d7c0f5ea4187c0bdb74e6f6190f3c956a222d61984cbd94ed19e45025d4c9",
	            "SandboxKey": "/var/run/docker/netns/1c6d7c0f5ea4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-264160": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:a5:ad:7a:8b:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b720c74a0dc38658463082bcb93730b420d57f391d495ecb21d74f5ad35b4f21",
	                    "EndpointID": "4800aba7ded95ed95a56ef1ad4bf1b238d330afe47c91b66c43c80a2794b655c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-264160",
	                        "49717cdd4541"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
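
Two fields in the inspect output above bear directly on this failure: "Ulimits": [] under HostConfig (no explicit nofile limit was passed when the container was created; compare the docker run command further down in this log, which sets no --ulimit flag), and the ephemeral host ports under NetworkSettings.Ports. A sketch for querying just those fields; the second template is the same one minikube itself runs later in this log:

    docker inspect old-k8s-version-264160 --format '{{json .HostConfig.Ulimits}}'
    docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160   # prints '33054'
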
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264160 -n old-k8s-version-264160
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-264160 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-264160 logs -n 25: (1.208323284s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-156590 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo docker system info                                                                                                                                                                                                            │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo containerd config dump                                                                                                                                                                                                        │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo crio config                                                                                                                                                                                                                   │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ delete  │ -p cilium-156590                                                                                                                                                                                                                                    │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-expiration-750367 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-750367   │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ ssh     │ force-systemd-env-388402 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-388402 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p force-systemd-env-388402                                                                                                                                                                                                                         │ force-systemd-env-388402 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-options-815306 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-815306      │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ cert-options-815306 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-815306      │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p cert-options-815306 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-815306      │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ delete  │ -p cert-options-815306                                                                                                                                                                                                                              │ cert-options-815306      │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ start   │ -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-264160   │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:36 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:35:30
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:35:30.257107  204649 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:35:30.257270  204649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:30.257288  204649 out.go:374] Setting ErrFile to fd 2...
	I1119 22:35:30.257293  204649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:30.257586  204649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:35:30.258032  204649 out.go:368] Setting JSON to false
	I1119 22:35:30.259057  204649 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4651,"bootTime":1763587079,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 22:35:30.259135  204649 start.go:143] virtualization:  
	I1119 22:35:30.265034  204649 out.go:179] * [old-k8s-version-264160] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:35:30.268600  204649 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:35:30.268654  204649 notify.go:221] Checking for updates...
	I1119 22:35:30.275244  204649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:35:30.278424  204649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:35:30.281805  204649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 22:35:30.285044  204649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:35:30.288125  204649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:35:30.291809  204649 config.go:182] Loaded profile config "cert-expiration-750367": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:35:30.291938  204649 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:35:30.328984  204649 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:35:30.329118  204649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:35:30.391514  204649 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:35:30.382377652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:35:30.391618  204649 docker.go:319] overlay module found
	I1119 22:35:30.394904  204649 out.go:179] * Using the docker driver based on user configuration
	I1119 22:35:30.397906  204649 start.go:309] selected driver: docker
	I1119 22:35:30.397928  204649 start.go:930] validating driver "docker" against <nil>
	I1119 22:35:30.397942  204649 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:35:30.398744  204649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:35:30.457338  204649 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:35:30.447544183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:35:30.457505  204649 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:35:30.457734  204649 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:35:30.460603  204649 out.go:179] * Using Docker driver with root privileges
	I1119 22:35:30.463555  204649 cni.go:84] Creating CNI manager for ""
	I1119 22:35:30.463623  204649 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:35:30.463636  204649 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:35:30.463716  204649 start.go:353] cluster config:
	{Name:old-k8s-version-264160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:35:30.466849  204649 out.go:179] * Starting "old-k8s-version-264160" primary control-plane node in "old-k8s-version-264160" cluster
	I1119 22:35:30.469744  204649 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:35:30.472743  204649 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:35:30.475730  204649 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 22:35:30.475797  204649 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1119 22:35:30.475812  204649 cache.go:65] Caching tarball of preloaded images
	I1119 22:35:30.475815  204649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:35:30.475897  204649 preload.go:238] Found /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1119 22:35:30.475907  204649 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1119 22:35:30.476103  204649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/config.json ...
	I1119 22:35:30.476142  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/config.json: {Name:mka3956cf816ce3f0dc4b41766ded046d7e239b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:30.495142  204649 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:35:30.495164  204649 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:35:30.495178  204649 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:35:30.495202  204649 start.go:360] acquireMachinesLock for old-k8s-version-264160: {Name:mkb1d6d80392c055072776fe42d903323b85b557 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:35:30.495313  204649 start.go:364] duration metric: took 84.916µs to acquireMachinesLock for "old-k8s-version-264160"
	I1119 22:35:30.495346  204649 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-264160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:35:30.495417  204649 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:35:30.498755  204649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:35:30.499000  204649 start.go:159] libmachine.API.Create for "old-k8s-version-264160" (driver="docker")
	I1119 22:35:30.499040  204649 client.go:173] LocalClient.Create starting
	I1119 22:35:30.499112  204649 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem
	I1119 22:35:30.499148  204649 main.go:143] libmachine: Decoding PEM data...
	I1119 22:35:30.499166  204649 main.go:143] libmachine: Parsing certificate...
	I1119 22:35:30.499221  204649 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem
	I1119 22:35:30.499243  204649 main.go:143] libmachine: Decoding PEM data...
	I1119 22:35:30.499252  204649 main.go:143] libmachine: Parsing certificate...
	I1119 22:35:30.499620  204649 cli_runner.go:164] Run: docker network inspect old-k8s-version-264160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:35:30.514882  204649 cli_runner.go:211] docker network inspect old-k8s-version-264160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:35:30.514967  204649 network_create.go:284] running [docker network inspect old-k8s-version-264160] to gather additional debugging logs...
	I1119 22:35:30.514989  204649 cli_runner.go:164] Run: docker network inspect old-k8s-version-264160
	W1119 22:35:30.529792  204649 cli_runner.go:211] docker network inspect old-k8s-version-264160 returned with exit code 1
	I1119 22:35:30.529827  204649 network_create.go:287] error running [docker network inspect old-k8s-version-264160]: docker network inspect old-k8s-version-264160: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-264160 not found
	I1119 22:35:30.529841  204649 network_create.go:289] output of [docker network inspect old-k8s-version-264160]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-264160 not found
	
	** /stderr **
	I1119 22:35:30.529955  204649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:35:30.546966  204649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b0fa93c84379 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:8f:4f:8f:5a:a3} reservation:<nil>}
	I1119 22:35:30.547286  204649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-141c656f658f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:30:08:ea:1a:b9} reservation:<nil>}
	I1119 22:35:30.547626  204649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae633a5ffae IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:73:d8:2e:30:94} reservation:<nil>}
	I1119 22:35:30.548050  204649 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f9110}
	I1119 22:35:30.548074  204649 network_create.go:124] attempt to create docker network old-k8s-version-264160 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 22:35:30.548135  204649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-264160 old-k8s-version-264160
	I1119 22:35:30.612059  204649 network_create.go:108] docker network old-k8s-version-264160 192.168.76.0/24 created
	I1119 22:35:30.612094  204649 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-264160" container
	I1119 22:35:30.612164  204649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:35:30.629392  204649 cli_runner.go:164] Run: docker volume create old-k8s-version-264160 --label name.minikube.sigs.k8s.io=old-k8s-version-264160 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:35:30.648884  204649 oci.go:103] Successfully created a docker volume old-k8s-version-264160
	I1119 22:35:30.648982  204649 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-264160-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-264160 --entrypoint /usr/bin/test -v old-k8s-version-264160:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:35:31.199519  204649 oci.go:107] Successfully prepared a docker volume old-k8s-version-264160
	I1119 22:35:31.199605  204649 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 22:35:31.199622  204649 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:35:31.199697  204649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-264160:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 22:35:36.761404  204649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-264160:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (5.561655508s)
	I1119 22:35:36.761444  204649 kic.go:203] duration metric: took 5.561818243s to extract preloaded images to volume ...
	W1119 22:35:36.761577  204649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:35:36.761693  204649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:35:36.815053  204649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-264160 --name old-k8s-version-264160 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-264160 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-264160 --network old-k8s-version-264160 --ip 192.168.76.2 --volume old-k8s-version-264160:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:35:37.145087  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Running}}
	I1119 22:35:37.171282  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:35:37.199972  204649 cli_runner.go:164] Run: docker exec old-k8s-version-264160 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:35:37.254683  204649 oci.go:144] the created container "old-k8s-version-264160" has a running status.
	I1119 22:35:37.254726  204649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa...
	I1119 22:35:38.063600  204649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:35:38.084666  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:35:38.103756  204649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:35:38.103781  204649 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-264160 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:35:38.159199  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:35:38.177494  204649 machine.go:94] provisionDockerMachine start ...
	I1119 22:35:38.177599  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:38.195122  204649 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:38.195453  204649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1119 22:35:38.195469  204649 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:35:38.196184  204649 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 22:35:41.337849  204649 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-264160
	
	I1119 22:35:41.337872  204649 ubuntu.go:182] provisioning hostname "old-k8s-version-264160"
	I1119 22:35:41.337936  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:41.356186  204649 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:41.356488  204649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1119 22:35:41.356501  204649 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-264160 && echo "old-k8s-version-264160" | sudo tee /etc/hostname
	I1119 22:35:41.512063  204649 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-264160
	
	I1119 22:35:41.512155  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:41.531307  204649 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:41.531635  204649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1119 22:35:41.531659  204649 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-264160' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-264160/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-264160' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:35:41.674522  204649 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:35:41.674549  204649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-2347/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-2347/.minikube}
	I1119 22:35:41.674570  204649 ubuntu.go:190] setting up certificates
	I1119 22:35:41.674581  204649 provision.go:84] configureAuth start
	I1119 22:35:41.674640  204649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-264160
	I1119 22:35:41.694614  204649 provision.go:143] copyHostCerts
	I1119 22:35:41.694682  204649 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem, removing ...
	I1119 22:35:41.694696  204649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem
	I1119 22:35:41.694778  204649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem (1675 bytes)
	I1119 22:35:41.694893  204649 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem, removing ...
	I1119 22:35:41.694904  204649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem
	I1119 22:35:41.694933  204649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem (1082 bytes)
	I1119 22:35:41.694994  204649 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem, removing ...
	I1119 22:35:41.695002  204649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem
	I1119 22:35:41.695027  204649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem (1123 bytes)
	I1119 22:35:41.695078  204649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-264160 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-264160]
	I1119 22:35:41.985138  204649 provision.go:177] copyRemoteCerts
	I1119 22:35:41.985210  204649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:35:41.985253  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:42.011744  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:35:42.120462  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1119 22:35:42.153941  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:35:42.177275  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:35:42.199768  204649 provision.go:87] duration metric: took 525.161639ms to configureAuth
	I1119 22:35:42.199797  204649 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:35:42.199999  204649 config.go:182] Loaded profile config "old-k8s-version-264160": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 22:35:42.200014  204649 machine.go:97] duration metric: took 4.022496163s to provisionDockerMachine
	I1119 22:35:42.200022  204649 client.go:176] duration metric: took 11.700970491s to LocalClient.Create
	I1119 22:35:42.200036  204649 start.go:167] duration metric: took 11.70103788s to libmachine.API.Create "old-k8s-version-264160"
	I1119 22:35:42.200044  204649 start.go:293] postStartSetup for "old-k8s-version-264160" (driver="docker")
	I1119 22:35:42.200053  204649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:35:42.200107  204649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:35:42.200153  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:42.221138  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:35:42.326805  204649 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:35:42.330396  204649 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:35:42.330426  204649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:35:42.330439  204649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/addons for local assets ...
	I1119 22:35:42.330497  204649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/files for local assets ...
	I1119 22:35:42.330585  204649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem -> 41442.pem in /etc/ssl/certs
	I1119 22:35:42.330694  204649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:35:42.338569  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:35:42.358341  204649 start.go:296] duration metric: took 158.281623ms for postStartSetup
	I1119 22:35:42.358732  204649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-264160
	I1119 22:35:42.376951  204649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/config.json ...
	I1119 22:35:42.377417  204649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:35:42.377467  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:42.395134  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:35:42.495341  204649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:35:42.499972  204649 start.go:128] duration metric: took 12.004539402s to createHost
	I1119 22:35:42.500036  204649 start.go:83] releasing machines lock for "old-k8s-version-264160", held for 12.004707247s
	I1119 22:35:42.500112  204649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-264160
	I1119 22:35:42.517291  204649 ssh_runner.go:195] Run: cat /version.json
	I1119 22:35:42.517425  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:42.517727  204649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:35:42.517817  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:42.538882  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:35:42.547918  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:35:42.646164  204649 ssh_runner.go:195] Run: systemctl --version
	I1119 22:35:42.733875  204649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:35:42.738275  204649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:35:42.738377  204649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:35:42.768357  204649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:35:42.768382  204649 start.go:496] detecting cgroup driver to use...
	I1119 22:35:42.768416  204649 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:35:42.768467  204649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:35:42.786112  204649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:35:42.799389  204649 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:35:42.799458  204649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:35:42.817550  204649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:35:42.837250  204649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:35:42.954428  204649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:35:43.089677  204649 docker.go:234] disabling docker service ...
	I1119 22:35:43.089796  204649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:35:43.119196  204649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:35:43.133883  204649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:35:43.271748  204649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:35:43.403111  204649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:35:43.416605  204649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:35:43.431762  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1119 22:35:43.441044  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:35:43.450280  204649 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1119 22:35:43.450355  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1119 22:35:43.460541  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:35:43.469380  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:35:43.478023  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:35:43.486801  204649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:35:43.495927  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:35:43.505431  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:35:43.514750  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:35:43.524906  204649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:35:43.533562  204649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:35:43.541294  204649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:35:43.666061  204649 ssh_runner.go:195] Run: sudo systemctl restart containerd
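The block of sed edits between 22:35:43.43 and 22:35:43.52 is how minikube reconciles /etc/containerd/config.toml in place instead of templating a fresh file: each edit is idempotent, so re-provisioning the same node converges. Isolated as a minimal sketch (commands lifted from the log lines above; the stock kicbase config.toml layout is assumed):

	# Force containerd onto the cgroupfs driver and the v1.28-era pause image,
	# then bounce the service and confirm the CRI socket answers.
	set -euo pipefail
	CFG=/etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
	sudo systemctl daemon-reload
	sudo systemctl restart containerd
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version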
	I1119 22:35:43.801836  204649 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:35:43.801996  204649 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:35:43.807154  204649 start.go:564] Will wait 60s for crictl version
	I1119 22:35:43.807283  204649 ssh_runner.go:195] Run: which crictl
	I1119 22:35:43.810929  204649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:35:43.840804  204649 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:35:43.840924  204649 ssh_runner.go:195] Run: containerd --version
	I1119 22:35:43.863403  204649 ssh_runner.go:195] Run: containerd --version
	I1119 22:35:43.892718  204649 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1119 22:35:43.895641  204649 cli_runner.go:164] Run: docker network inspect old-k8s-version-264160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:35:43.912965  204649 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 22:35:43.916790  204649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
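The grep-then-copy pair above (22:35:43.91 and .92) is minikube's idempotent /etc/hosts edit: first check whether the record already exists, and if not, rewrite the file with any stale host.minikube.internal line filtered out and the fresh mapping appended. The rewrite goes through a temp file because sudo cannot write through a shell redirect. Distilled:

	# Re-runnable pin for host.minikube.internal in /etc/hosts.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$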
	I1119 22:35:43.926772  204649 kubeadm.go:884] updating cluster {Name:old-k8s-version-264160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:35:43.926887  204649 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 22:35:43.926949  204649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:35:43.959370  204649 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:35:43.959391  204649 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:35:43.959451  204649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:35:43.989251  204649 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:35:43.989276  204649 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:35:43.989284  204649 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1119 22:35:43.989377  204649 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-264160 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
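The bare ExecStart= line in the rendered unit above is deliberate: in a systemd drop-in, an empty ExecStart= first clears the command inherited from the base kubelet.service so that the ExecStart= which follows replaces it rather than being rejected as a second command. Two stock systemd commands show the merge result (illustrative, not part of the logged run):

	# Show the base unit plus every drop-in that overrides it.
	systemctl cat kubelet
	# List all units extended by drop-ins on this host.
	systemd-delta --type=extended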
	I1119 22:35:43.989454  204649 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:35:44.018509  204649 cni.go:84] Creating CNI manager for ""
	I1119 22:35:44.018532  204649 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:35:44.018554  204649 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:35:44.018590  204649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-264160 NodeName:old-k8s-version-264160 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:35:44.018720  204649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-264160"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
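	A config like the one rendered above can be sanity-checked before it is handed to the node. The dry run below is a hypothetical local check, not something the logged run performs:
	
	# Validate the rendered config and print the manifests kubeadm would
	# write, without touching /etc/kubernetes (hypothetical, not in this run).
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# Render the upstream defaults this config overrides, for comparison.
	kubeadm config print init-defaults --component-configs KubeletConfiguration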
	
	I1119 22:35:44.018791  204649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1119 22:35:44.027774  204649 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:35:44.027843  204649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:35:44.035977  204649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1119 22:35:44.049828  204649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:35:44.063834  204649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1119 22:35:44.078459  204649 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:35:44.082544  204649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:35:44.093549  204649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:35:44.218127  204649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:35:44.238847  204649 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160 for IP: 192.168.76.2
	I1119 22:35:44.238867  204649 certs.go:195] generating shared ca certs ...
	I1119 22:35:44.238885  204649 certs.go:227] acquiring lock for ca certs: {Name:mk76285c445bf14c1e73dedba3201c9181209ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:44.239062  204649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key
	I1119 22:35:44.239112  204649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key
	I1119 22:35:44.239124  204649 certs.go:257] generating profile certs ...
	I1119 22:35:44.239186  204649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.key
	I1119 22:35:44.239203  204649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt with IP's: []
	I1119 22:35:44.811737  204649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt ...
	I1119 22:35:44.811764  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: {Name:mk14e11ecda6c7214508a5ade0f9ee915e780f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:44.811951  204649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.key ...
	I1119 22:35:44.811960  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.key: {Name:mk0adfc8036cdd3c163e4cffd5e262cb5308dfe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:44.812038  204649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key.955d0b5b
	I1119 22:35:44.812063  204649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt.955d0b5b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 22:35:45.101024  204649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt.955d0b5b ...
	I1119 22:35:45.101056  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt.955d0b5b: {Name:mk5142ac1d579327ae160e83fc7f68b0f3557595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:45.101255  204649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key.955d0b5b ...
	I1119 22:35:45.101267  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key.955d0b5b: {Name:mkc12bee6747eface51cd5e77da3f942ad5e5618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:45.101361  204649 certs.go:382] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt.955d0b5b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt
	I1119 22:35:45.101462  204649 certs.go:386] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key.955d0b5b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key
	I1119 22:35:45.101522  204649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.key
	I1119 22:35:45.101539  204649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.crt with IP's: []
	I1119 22:35:45.832941  204649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.crt ...
	I1119 22:35:45.832971  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.crt: {Name:mk306cbc09a8a4cdf49bd23a7f735885d2e6d6d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:45.833166  204649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.key ...
	I1119 22:35:45.833185  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.key: {Name:mk51455941ef13941a00f8719c0c4a50b2eaa3aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
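Each "generating signed profile cert" step above mints a key pair and signs it against the shared minikubeCA, embedding the listed IPs as SANs. A rough openssl equivalent of the apiserver cert, for orientation only (the openssl invocation is an assumption, not minikube's crypto.go code path; file names follow the log):

	# Key + CSR, then sign with the profile CA, embedding the IP SANs
	# recorded in the log line at 22:35:44.812063.
	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2')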
	I1119 22:35:45.833395  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem (1338 bytes)
	W1119 22:35:45.833433  204649 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144_empty.pem, impossibly tiny 0 bytes
	I1119 22:35:45.833442  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:35:45.833468  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:35:45.833497  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:35:45.833529  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem (1675 bytes)
	I1119 22:35:45.833577  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:35:45.834165  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:35:45.856349  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1119 22:35:45.877913  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:35:45.896516  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:35:45.914586  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1119 22:35:45.933361  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:35:45.951038  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:35:45.973047  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:35:45.994027  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:35:46.025730  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem --> /usr/share/ca-certificates/4144.pem (1338 bytes)
	I1119 22:35:46.045750  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /usr/share/ca-certificates/41442.pem (1708 bytes)
	I1119 22:35:46.073629  204649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:35:46.087614  204649 ssh_runner.go:195] Run: openssl version
	I1119 22:35:46.094872  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:35:46.103931  204649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:46.108400  204649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:46.108519  204649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:46.165543  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:35:46.174470  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4144.pem && ln -fs /usr/share/ca-certificates/4144.pem /etc/ssl/certs/4144.pem"
	I1119 22:35:46.182680  204649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4144.pem
	I1119 22:35:46.186577  204649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/4144.pem
	I1119 22:35:46.186637  204649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem
	I1119 22:35:46.228043  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4144.pem /etc/ssl/certs/51391683.0"
	I1119 22:35:46.236269  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41442.pem && ln -fs /usr/share/ca-certificates/41442.pem /etc/ssl/certs/41442.pem"
	I1119 22:35:46.244687  204649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41442.pem
	I1119 22:35:46.248576  204649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/41442.pem
	I1119 22:35:46.248696  204649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem
	I1119 22:35:46.290804  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41442.pem /etc/ssl/certs/3ec20f2e.0"
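The oddly named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed cert-directory convention: the file name is the certificate's subject hash plus a numeric suffix that disambiguates collisions, which is exactly what the interleaved `openssl x509 -hash -noout` calls compute. For example:

	# Derive the link name OpenSSL expects for a trusted CA file.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941, hence the symlink /etc/ssl/certs/b5213941.0 created above.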
	I1119 22:35:46.299091  204649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:35:46.302689  204649 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:35:46.302790  204649 kubeadm.go:401] StartCluster: {Name:old-k8s-version-264160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:35:46.302872  204649 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:35:46.302930  204649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:35:46.341874  204649 cri.go:89] found id: ""
	I1119 22:35:46.341955  204649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:35:46.349861  204649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:35:46.358624  204649 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:35:46.358700  204649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:35:46.366859  204649 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:35:46.366882  204649 kubeadm.go:158] found existing configuration files:
	
	I1119 22:35:46.366956  204649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:35:46.375053  204649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:35:46.375118  204649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:35:46.382569  204649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:35:46.390549  204649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:35:46.390660  204649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:35:46.398378  204649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:35:46.406002  204649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:35:46.406127  204649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:35:46.414558  204649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:35:46.422462  204649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:35:46.422528  204649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:35:46.430234  204649 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:35:46.480821  204649 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1119 22:35:46.480973  204649 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:35:46.518306  204649 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:35:46.518408  204649 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:35:46.518469  204649 kubeadm.go:319] OS: Linux
	I1119 22:35:46.518555  204649 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:35:46.518627  204649 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:35:46.518704  204649 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:35:46.518775  204649 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:35:46.518848  204649 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:35:46.518928  204649 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:35:46.518993  204649 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:35:46.519065  204649 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:35:46.519136  204649 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:35:46.603387  204649 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:35:46.603532  204649 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:35:46.603659  204649 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1119 22:35:46.748614  204649 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:35:46.754520  204649 out.go:252]   - Generating certificates and keys ...
	I1119 22:35:46.754636  204649 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:35:46.754728  204649 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:35:47.362621  204649 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:35:47.861152  204649 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:35:48.578567  204649 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:35:48.709308  204649 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:35:49.572586  204649 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:35:49.572742  204649 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-264160] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:35:50.286968  204649 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:35:50.287350  204649 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-264160] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:35:50.729163  204649 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:35:51.087355  204649 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:35:51.301494  204649 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:35:51.301799  204649 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:35:52.439151  204649 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:35:52.767854  204649 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:35:53.170174  204649 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:35:53.873745  204649 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:35:53.874592  204649 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:35:53.877867  204649 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:35:53.883494  204649 out.go:252]   - Booting up control plane ...
	I1119 22:35:53.883605  204649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:35:53.883687  204649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:35:53.883756  204649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:35:53.900950  204649 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:35:53.901278  204649 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:35:53.901523  204649 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:35:54.050697  204649 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1119 22:36:04.052724  204649 kubeadm.go:319] [apiclient] All control plane components are healthy after 10.003761 seconds
	I1119 22:36:04.052869  204649 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:36:04.072130  204649 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:36:04.605781  204649 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:36:04.606002  204649 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-264160 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:36:05.122165  204649 kubeadm.go:319] [bootstrap-token] Using token: t3hgjm.t27pk8uf8r4mqrko
	I1119 22:36:05.125207  204649 out.go:252]   - Configuring RBAC rules ...
	I1119 22:36:05.125347  204649 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:36:05.138372  204649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:36:05.149292  204649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:36:05.153962  204649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:36:05.159111  204649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:36:05.163924  204649 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:36:05.183969  204649 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:36:05.490668  204649 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:36:05.544743  204649 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:36:05.545712  204649 kubeadm.go:319] 
	I1119 22:36:05.545794  204649 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:36:05.545800  204649 kubeadm.go:319] 
	I1119 22:36:05.545881  204649 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:36:05.545886  204649 kubeadm.go:319] 
	I1119 22:36:05.545912  204649 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:36:05.545975  204649 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:36:05.546029  204649 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:36:05.546036  204649 kubeadm.go:319] 
	I1119 22:36:05.546092  204649 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:36:05.546097  204649 kubeadm.go:319] 
	I1119 22:36:05.546192  204649 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:36:05.546198  204649 kubeadm.go:319] 
	I1119 22:36:05.546252  204649 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:36:05.546330  204649 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:36:05.546401  204649 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:36:05.546405  204649 kubeadm.go:319] 
	I1119 22:36:05.546493  204649 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:36:05.546572  204649 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:36:05.546577  204649 kubeadm.go:319] 
	I1119 22:36:05.546665  204649 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token t3hgjm.t27pk8uf8r4mqrko \
	I1119 22:36:05.546773  204649 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 \
	I1119 22:36:05.546794  204649 kubeadm.go:319] 	--control-plane 
	I1119 22:36:05.546798  204649 kubeadm.go:319] 
	I1119 22:36:05.546886  204649 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:36:05.546890  204649 kubeadm.go:319] 
	I1119 22:36:05.546975  204649 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token t3hgjm.t27pk8uf8r4mqrko \
	I1119 22:36:05.547080  204649 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 
	I1119 22:36:05.551148  204649 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:36:05.551265  204649 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:36:05.551281  204649 cni.go:84] Creating CNI manager for ""
	I1119 22:36:05.551288  204649 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:36:05.554507  204649 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:36:05.557507  204649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:36:05.576310  204649 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1119 22:36:05.576331  204649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:36:05.593718  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:36:06.658889  204649 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.065138821s)
	I1119 22:36:06.658975  204649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:36:06.659094  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:06.659175  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-264160 minikube.k8s.io/updated_at=2025_11_19T22_36_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=old-k8s-version-264160 minikube.k8s.io/primary=true
	I1119 22:36:06.818009  204649 ops.go:34] apiserver oom_adj: -16
	I1119 22:36:06.818101  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:07.318669  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:07.818290  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:08.318653  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:08.818829  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:09.318705  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:09.818670  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:10.318656  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:10.818343  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:11.318742  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:11.818660  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:12.318643  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:12.818204  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:13.318233  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:13.818478  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:14.318102  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:14.818178  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:15.318224  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:15.818601  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:16.319007  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:16.818836  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:17.318883  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:17.818083  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:18.005461  204649 kubeadm.go:1114] duration metric: took 11.346407343s to wait for elevateKubeSystemPrivileges
	I1119 22:36:18.005498  204649 kubeadm.go:403] duration metric: took 31.702712181s to StartCluster
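The burst of identical `kubectl get sa default` calls from 22:36:06.8 onward is a 500 ms poll-until-ready gate: the ServiceAccount admission plugin rejects pod creation in a namespace until that namespace's default ServiceAccount exists, so minikube waits for it before treating the cluster as usable. The distilled pattern (plain kubectl standing in for the logged ssh_runner calls):

	# Block until the default ServiceAccount has been minted.
	until kubectl get sa default -n default >/dev/null 2>&1; do
	  sleep 0.5
	done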
	I1119 22:36:18.005516  204649 settings.go:142] acquiring lock: {Name:mk5c8f7d46662d574c7e53cf7b09709855a1e14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:36:18.005603  204649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:36:18.006647  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:36:18.006944  204649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:36:18.006951  204649 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:36:18.007274  204649 config.go:182] Loaded profile config "old-k8s-version-264160": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 22:36:18.007313  204649 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:36:18.007401  204649 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-264160"
	I1119 22:36:18.007419  204649 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-264160"
	I1119 22:36:18.007444  204649 host.go:66] Checking if "old-k8s-version-264160" exists ...
	I1119 22:36:18.007919  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:36:18.008446  204649 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-264160"
	I1119 22:36:18.008469  204649 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-264160"
	I1119 22:36:18.008780  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:36:18.011866  204649 out.go:179] * Verifying Kubernetes components...
	I1119 22:36:18.014838  204649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:36:18.055880  204649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:36:18.056763  204649 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-264160"
	I1119 22:36:18.056800  204649 host.go:66] Checking if "old-k8s-version-264160" exists ...
	I1119 22:36:18.057242  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:36:18.059443  204649 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:36:18.059467  204649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:36:18.059527  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:36:18.093613  204649 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:36:18.093726  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:36:18.095300  204649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:36:18.095428  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:36:18.135800  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:36:18.357324  204649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:36:18.357451  204649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:36:18.439741  204649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:36:18.443940  204649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:36:19.165631  204649 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-264160" to be "Ready" ...
	I1119 22:36:19.165952  204649 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 22:36:19.668262  204649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.228448448s)
	I1119 22:36:19.668305  204649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.224346607s)
	I1119 22:36:19.682930  204649 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-264160" context rescaled to 1 replicas
	I1119 22:36:19.691208  204649 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:36:19.694506  204649 addons.go:515] duration metric: took 1.687167131s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1119 22:36:21.170389  204649 node_ready.go:57] node "old-k8s-version-264160" has "Ready":"False" status (will retry)
	W1119 22:36:23.669181  204649 node_ready.go:57] node "old-k8s-version-264160" has "Ready":"False" status (will retry)
	W1119 22:36:26.169468  204649 node_ready.go:57] node "old-k8s-version-264160" has "Ready":"False" status (will retry)
	W1119 22:36:28.668771  204649 node_ready.go:57] node "old-k8s-version-264160" has "Ready":"False" status (will retry)
	W1119 22:36:30.669387  204649 node_ready.go:57] node "old-k8s-version-264160" has "Ready":"False" status (will retry)
	I1119 22:36:31.179436  204649 node_ready.go:49] node "old-k8s-version-264160" is "Ready"
	I1119 22:36:31.179462  204649 node_ready.go:38] duration metric: took 12.013798629s for node "old-k8s-version-264160" to be "Ready" ...
	I1119 22:36:31.179475  204649 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:36:31.179538  204649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:36:31.199071  204649 api_server.go:72] duration metric: took 13.192088991s to wait for apiserver process to appear ...
	I1119 22:36:31.199094  204649 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:36:31.199116  204649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:36:31.209770  204649 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 22:36:31.211739  204649 api_server.go:141] control plane version: v1.28.0
	I1119 22:36:31.211767  204649 api_server.go:131] duration metric: took 12.666386ms to wait for apiserver health ...
	I1119 22:36:31.211777  204649 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:36:31.216012  204649 system_pods.go:59] 8 kube-system pods found
	I1119 22:36:31.216054  204649 system_pods.go:61] "coredns-5dd5756b68-vz7zx" [7e7645ad-49a9-4f0c-89cc-128538e4d95c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:36:31.216062  204649 system_pods.go:61] "etcd-old-k8s-version-264160" [1bd42d38-2921-483d-b656-d1f12178141b] Running
	I1119 22:36:31.216068  204649 system_pods.go:61] "kindnet-m9nqq" [2f9f6fbb-c725-49fd-ba3a-c84a7640aac2] Running
	I1119 22:36:31.216073  204649 system_pods.go:61] "kube-apiserver-old-k8s-version-264160" [454724a2-4fd6-4dc1-9cc1-a4b60944a9df] Running
	I1119 22:36:31.216084  204649 system_pods.go:61] "kube-controller-manager-old-k8s-version-264160" [a5ad5849-09a1-43bd-861a-8c92712b0a14] Running
	I1119 22:36:31.216088  204649 system_pods.go:61] "kube-proxy-zzmnr" [3ee1645f-fba5-4206-bb83-70d298a4c5ac] Running
	I1119 22:36:31.216100  204649 system_pods.go:61] "kube-scheduler-old-k8s-version-264160" [fbad20e1-7729-4503-b929-bc32986a00e8] Running
	I1119 22:36:31.216106  204649 system_pods.go:61] "storage-provisioner" [8e2dda77-5a6d-4796-926b-5a06158f8cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:36:31.216112  204649 system_pods.go:74] duration metric: took 4.329001ms to wait for pod list to return data ...
	I1119 22:36:31.216127  204649 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:36:31.219246  204649 default_sa.go:45] found service account: "default"
	I1119 22:36:31.219283  204649 default_sa.go:55] duration metric: took 3.150461ms for default service account to be created ...
	I1119 22:36:31.219293  204649 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:36:31.226730  204649 system_pods.go:86] 8 kube-system pods found
	I1119 22:36:31.226780  204649 system_pods.go:89] "coredns-5dd5756b68-vz7zx" [7e7645ad-49a9-4f0c-89cc-128538e4d95c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:36:31.226788  204649 system_pods.go:89] "etcd-old-k8s-version-264160" [1bd42d38-2921-483d-b656-d1f12178141b] Running
	I1119 22:36:31.226795  204649 system_pods.go:89] "kindnet-m9nqq" [2f9f6fbb-c725-49fd-ba3a-c84a7640aac2] Running
	I1119 22:36:31.226801  204649 system_pods.go:89] "kube-apiserver-old-k8s-version-264160" [454724a2-4fd6-4dc1-9cc1-a4b60944a9df] Running
	I1119 22:36:31.226820  204649 system_pods.go:89] "kube-controller-manager-old-k8s-version-264160" [a5ad5849-09a1-43bd-861a-8c92712b0a14] Running
	I1119 22:36:31.226840  204649 system_pods.go:89] "kube-proxy-zzmnr" [3ee1645f-fba5-4206-bb83-70d298a4c5ac] Running
	I1119 22:36:31.226854  204649 system_pods.go:89] "kube-scheduler-old-k8s-version-264160" [fbad20e1-7729-4503-b929-bc32986a00e8] Running
	I1119 22:36:31.226880  204649 system_pods.go:89] "storage-provisioner" [8e2dda77-5a6d-4796-926b-5a06158f8cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:36:31.226914  204649 retry.go:31] will retry after 302.789316ms: missing components: kube-dns
	I1119 22:36:31.534752  204649 system_pods.go:86] 8 kube-system pods found
	I1119 22:36:31.534798  204649 system_pods.go:89] "coredns-5dd5756b68-vz7zx" [7e7645ad-49a9-4f0c-89cc-128538e4d95c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:36:31.534805  204649 system_pods.go:89] "etcd-old-k8s-version-264160" [1bd42d38-2921-483d-b656-d1f12178141b] Running
	I1119 22:36:31.534811  204649 system_pods.go:89] "kindnet-m9nqq" [2f9f6fbb-c725-49fd-ba3a-c84a7640aac2] Running
	I1119 22:36:31.534815  204649 system_pods.go:89] "kube-apiserver-old-k8s-version-264160" [454724a2-4fd6-4dc1-9cc1-a4b60944a9df] Running
	I1119 22:36:31.534821  204649 system_pods.go:89] "kube-controller-manager-old-k8s-version-264160" [a5ad5849-09a1-43bd-861a-8c92712b0a14] Running
	I1119 22:36:31.534825  204649 system_pods.go:89] "kube-proxy-zzmnr" [3ee1645f-fba5-4206-bb83-70d298a4c5ac] Running
	I1119 22:36:31.534829  204649 system_pods.go:89] "kube-scheduler-old-k8s-version-264160" [fbad20e1-7729-4503-b929-bc32986a00e8] Running
	I1119 22:36:31.534838  204649 system_pods.go:89] "storage-provisioner" [8e2dda77-5a6d-4796-926b-5a06158f8cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:36:31.534852  204649 retry.go:31] will retry after 260.752212ms: missing components: kube-dns
	I1119 22:36:31.802433  204649 system_pods.go:86] 8 kube-system pods found
	I1119 22:36:31.802477  204649 system_pods.go:89] "coredns-5dd5756b68-vz7zx" [7e7645ad-49a9-4f0c-89cc-128538e4d95c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:36:31.802484  204649 system_pods.go:89] "etcd-old-k8s-version-264160" [1bd42d38-2921-483d-b656-d1f12178141b] Running
	I1119 22:36:31.802492  204649 system_pods.go:89] "kindnet-m9nqq" [2f9f6fbb-c725-49fd-ba3a-c84a7640aac2] Running
	I1119 22:36:31.802496  204649 system_pods.go:89] "kube-apiserver-old-k8s-version-264160" [454724a2-4fd6-4dc1-9cc1-a4b60944a9df] Running
	I1119 22:36:31.802502  204649 system_pods.go:89] "kube-controller-manager-old-k8s-version-264160" [a5ad5849-09a1-43bd-861a-8c92712b0a14] Running
	I1119 22:36:31.802506  204649 system_pods.go:89] "kube-proxy-zzmnr" [3ee1645f-fba5-4206-bb83-70d298a4c5ac] Running
	I1119 22:36:31.802510  204649 system_pods.go:89] "kube-scheduler-old-k8s-version-264160" [fbad20e1-7729-4503-b929-bc32986a00e8] Running
	I1119 22:36:31.802517  204649 system_pods.go:89] "storage-provisioner" [8e2dda77-5a6d-4796-926b-5a06158f8cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:36:31.802540  204649 retry.go:31] will retry after 341.00697ms: missing components: kube-dns
	I1119 22:36:32.148247  204649 system_pods.go:86] 8 kube-system pods found
	I1119 22:36:32.148281  204649 system_pods.go:89] "coredns-5dd5756b68-vz7zx" [7e7645ad-49a9-4f0c-89cc-128538e4d95c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:36:32.148298  204649 system_pods.go:89] "etcd-old-k8s-version-264160" [1bd42d38-2921-483d-b656-d1f12178141b] Running
	I1119 22:36:32.148304  204649 system_pods.go:89] "kindnet-m9nqq" [2f9f6fbb-c725-49fd-ba3a-c84a7640aac2] Running
	I1119 22:36:32.148309  204649 system_pods.go:89] "kube-apiserver-old-k8s-version-264160" [454724a2-4fd6-4dc1-9cc1-a4b60944a9df] Running
	I1119 22:36:32.148314  204649 system_pods.go:89] "kube-controller-manager-old-k8s-version-264160" [a5ad5849-09a1-43bd-861a-8c92712b0a14] Running
	I1119 22:36:32.148320  204649 system_pods.go:89] "kube-proxy-zzmnr" [3ee1645f-fba5-4206-bb83-70d298a4c5ac] Running
	I1119 22:36:32.148329  204649 system_pods.go:89] "kube-scheduler-old-k8s-version-264160" [fbad20e1-7729-4503-b929-bc32986a00e8] Running
	I1119 22:36:32.148333  204649 system_pods.go:89] "storage-provisioner" [8e2dda77-5a6d-4796-926b-5a06158f8cdf] Running
	I1119 22:36:32.148348  204649 system_pods.go:126] duration metric: took 929.047421ms to wait for k8s-apps to be running ...
	I1119 22:36:32.148356  204649 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:36:32.148423  204649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:36:32.175720  204649 system_svc.go:56] duration metric: took 27.353086ms WaitForService to wait for kubelet
	I1119 22:36:32.175754  204649 kubeadm.go:587] duration metric: took 14.168776732s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:36:32.175782  204649 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:36:32.178856  204649 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:36:32.178889  204649 node_conditions.go:123] node cpu capacity is 2
	I1119 22:36:32.178903  204649 node_conditions.go:105] duration metric: took 3.115367ms to run NodePressure ...
	I1119 22:36:32.178915  204649 start.go:242] waiting for startup goroutines ...
	I1119 22:36:32.178933  204649 start.go:247] waiting for cluster config update ...
	I1119 22:36:32.178949  204649 start.go:256] writing updated cluster config ...
	I1119 22:36:32.179275  204649 ssh_runner.go:195] Run: rm -f paused
	I1119 22:36:32.186678  204649 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:36:32.192039  204649 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vz7zx" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 22:36:34.198510  204649 pod_ready.go:104] pod "coredns-5dd5756b68-vz7zx" is not "Ready", error: <nil>
	W1119 22:36:36.198937  204649 pod_ready.go:104] pod "coredns-5dd5756b68-vz7zx" is not "Ready", error: <nil>
	W1119 22:36:38.698791  204649 pod_ready.go:104] pod "coredns-5dd5756b68-vz7zx" is not "Ready", error: <nil>
	W1119 22:36:41.198015  204649 pod_ready.go:104] pod "coredns-5dd5756b68-vz7zx" is not "Ready", error: <nil>
	I1119 22:36:41.698204  204649 pod_ready.go:94] pod "coredns-5dd5756b68-vz7zx" is "Ready"
	I1119 22:36:41.698233  204649 pod_ready.go:86] duration metric: took 9.50616482s for pod "coredns-5dd5756b68-vz7zx" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.701276  204649 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.706418  204649 pod_ready.go:94] pod "etcd-old-k8s-version-264160" is "Ready"
	I1119 22:36:41.706451  204649 pod_ready.go:86] duration metric: took 5.148925ms for pod "etcd-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.709706  204649 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.715470  204649 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-264160" is "Ready"
	I1119 22:36:41.715499  204649 pod_ready.go:86] duration metric: took 5.766499ms for pod "kube-apiserver-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.718802  204649 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.896506  204649 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-264160" is "Ready"
	I1119 22:36:41.896538  204649 pod_ready.go:86] duration metric: took 177.710699ms for pod "kube-controller-manager-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:42.096924  204649 pod_ready.go:83] waiting for pod "kube-proxy-zzmnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:42.496606  204649 pod_ready.go:94] pod "kube-proxy-zzmnr" is "Ready"
	I1119 22:36:42.496635  204649 pod_ready.go:86] duration metric: took 399.679699ms for pod "kube-proxy-zzmnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:42.696640  204649 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:43.096504  204649 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-264160" is "Ready"
	I1119 22:36:43.096533  204649 pod_ready.go:86] duration metric: took 399.863388ms for pod "kube-scheduler-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:43.096547  204649 pod_ready.go:40] duration metric: took 10.90982149s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:36:43.158402  204649 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1119 22:36:43.161490  204649 out.go:203] 
	W1119 22:36:43.164427  204649 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 22:36:43.167321  204649 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 22:36:43.171088  204649 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-264160" cluster and "default" namespace by default
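	The verification sequence in the start log above is plain polling: node Ready, then the apiserver process, then the /healthz endpoint, then per-pod readiness. A minimal sketch of the healthz probe logged at 22:36:31, in Go; the InsecureSkipVerify shortcut is an illustration-only assumption, the real client trusts the cluster CA instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The control plane serves /healthz on the apiserver port (8443 here).
		// NOTE: InsecureSkipVerify is a shortcut for this sketch only; a real
		// client would load the cluster CA certificate instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the literal body "ok",
		// which is exactly what the log above records.
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
	}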
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	85ec8d942d110       1611cd07b61d5       7 seconds ago       Running             busybox                   0                   1b11528fdca0b       busybox                                          default
	0b5a95c859ac3       97e04611ad434       21 seconds ago      Running             coredns                   0                   aa34d2193fc4c       coredns-5dd5756b68-vz7zx                         kube-system
	f62b743b6725e       ba04bb24b9575       21 seconds ago      Running             storage-provisioner       0                   2bededbe57122       storage-provisioner                              kube-system
	3dc4045566ee8       b1a8c6f707935       33 seconds ago      Running             kindnet-cni               0                   232dd2b4b80b5       kindnet-m9nqq                                    kube-system
	e5c22c9877dd1       940f54a5bcae9       35 seconds ago      Running             kube-proxy                0                   f45778acb4883       kube-proxy-zzmnr                                 kube-system
	0aa1bd28b6073       762dce4090c5f       57 seconds ago      Running             kube-scheduler            0                   4b7124d3d4b79       kube-scheduler-old-k8s-version-264160            kube-system
	83a25278b16a7       00543d2fe5d71       57 seconds ago      Running             kube-apiserver            0                   67f5df81322ce       kube-apiserver-old-k8s-version-264160            kube-system
	9ce9313d9aae4       46cc66ccc7c19       57 seconds ago      Running             kube-controller-manager   0                   0783ca7945d35       kube-controller-manager-old-k8s-version-264160   kube-system
	85f86fccea082       9cdd6470f48c8       57 seconds ago      Running             etcd                      0                   4969a45c845f9       etcd-old-k8s-version-264160                      kube-system
	
	
	==> containerd <==
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.649388743Z" level=info msg="CreateContainer within sandbox \"aa34d2193fc4cf037239bc48a6fac96674b060cb63b8de7320bb53007ec52479\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.651192399Z" level=info msg="connecting to shim f62b743b6725ec9ff1e91e664da6c9ce15d837afbab3608cc02fec3c9bd3d929" address="unix:///run/containerd/s/1693fe8eea8ad33a7610805dc3ed40de55c61613614162362386d2386e86ea05" protocol=ttrpc version=3
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.675391127Z" level=info msg="Container 0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.690874899Z" level=info msg="CreateContainer within sandbox \"aa34d2193fc4cf037239bc48a6fac96674b060cb63b8de7320bb53007ec52479\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f\""
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.691891617Z" level=info msg="StartContainer for \"0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f\""
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.692987039Z" level=info msg="connecting to shim 0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f" address="unix:///run/containerd/s/ba4b3d499342aaf3ebd6be16fa5ad2a140167ea49a534a0a812a3977c5dcf983" protocol=ttrpc version=3
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.744679204Z" level=info msg="StartContainer for \"f62b743b6725ec9ff1e91e664da6c9ce15d837afbab3608cc02fec3c9bd3d929\" returns successfully"
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.792512880Z" level=info msg="StartContainer for \"0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f\" returns successfully"
	Nov 19 22:36:43 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:43.710209778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2af6deb4-937f-4b9b-9de6-995e75a080b8,Namespace:default,Attempt:0,}"
	Nov 19 22:36:43 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:43.790459559Z" level=info msg="connecting to shim 1b11528fdca0ba74e5c7786578d6850eb5b37f9540b5e04e610639ce7fbd811f" address="unix:///run/containerd/s/b036a265eb01a921a8d2ed1a42211f4774df4a741b42e6007a96fa06394b6381" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:36:43 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:43.849747951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2af6deb4-937f-4b9b-9de6-995e75a080b8,Namespace:default,Attempt:0,} returns sandbox id \"1b11528fdca0ba74e5c7786578d6850eb5b37f9540b5e04e610639ce7fbd811f\""
	Nov 19 22:36:43 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:43.854324085Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.047353549Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.049424358Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.052705979Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.058110078Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.059158943Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.204520106s"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.059209750Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.063270047Z" level=info msg="CreateContainer within sandbox \"1b11528fdca0ba74e5c7786578d6850eb5b37f9540b5e04e610639ce7fbd811f\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.078717460Z" level=info msg="Container 85ec8d942d1102ad7f23f0923c0afa921c51c4b09ac0f93dc33203a257d7ca57: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.090943881Z" level=info msg="CreateContainer within sandbox \"1b11528fdca0ba74e5c7786578d6850eb5b37f9540b5e04e610639ce7fbd811f\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"85ec8d942d1102ad7f23f0923c0afa921c51c4b09ac0f93dc33203a257d7ca57\""
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.091676862Z" level=info msg="StartContainer for \"85ec8d942d1102ad7f23f0923c0afa921c51c4b09ac0f93dc33203a257d7ca57\""
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.092550045Z" level=info msg="connecting to shim 85ec8d942d1102ad7f23f0923c0afa921c51c4b09ac0f93dc33203a257d7ca57" address="unix:///run/containerd/s/b036a265eb01a921a8d2ed1a42211f4774df4a741b42e6007a96fa06394b6381" protocol=ttrpc version=3
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.167983720Z" level=info msg="StartContainer for \"85ec8d942d1102ad7f23f0923c0afa921c51c4b09ac0f93dc33203a257d7ca57\" returns successfully"
	Nov 19 22:36:52 old-k8s-version-264160 containerd[760]: E1119 22:36:52.581929     760 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
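	The PullImage / CreateContainer / StartContainer sequence above arrives via the CRI, but the same flow can be reproduced against this socket with the containerd Go client. A hedged sketch using the containerd 1.x client module path; the socket path and the "k8s.io" namespace match the log, everything else is illustrative:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/containerd/containerd"
		"github.com/containerd/containerd/namespaces"
	)

	func main() {
		// Connect to the same containerd instance the kubelet uses.
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// CRI-managed images live in the "k8s.io" namespace seen in the log.
		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

		// Pull and unpack the same busybox image the log shows being pulled.
		image, err := client.Pull(ctx, "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
			containerd.WithPullUnpack)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("pulled:", image.Name())
	}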
	
	
	==> coredns [0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51463 - 23570 "HINFO IN 6404155507127924057.1273287447177964912. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026393207s
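	The "host record injected into CoreDNS's ConfigMap" step earlier in the run works by splicing a hosts block into this Corefile ahead of the forward plugin (and a log directive before errors). The resulting fragment looks roughly like this, reconstructed from the sed expression in the start log rather than dumped from the cluster:

	    log
	    errors
	    ...
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf

	The fallthrough directive lets queries that don't match the injected record continue to the forward plugin, so only host.minikube.internal is answered locally.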
	
	
	==> describe nodes <==
	Name:               old-k8s-version-264160
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-264160
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=old-k8s-version-264160
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_36_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:36:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-264160
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:36:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:36:36 +0000   Wed, 19 Nov 2025 22:35:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:36:36 +0000   Wed, 19 Nov 2025 22:35:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:36:36 +0000   Wed, 19 Nov 2025 22:35:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:36:36 +0000   Wed, 19 Nov 2025 22:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-264160
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                b680c3d2-ce1c-409c-bfdc-4a24b39315bd
	  Boot ID:                    b3875353-65b3-44b7-ad72-afadd7e2486a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-vz7zx                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     36s
	  kube-system                 etcd-old-k8s-version-264160                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         48s
	  kube-system                 kindnet-m9nqq                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      36s
	  kube-system                 kube-apiserver-old-k8s-version-264160             250m (12%)    0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-controller-manager-old-k8s-version-264160    200m (10%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-proxy-zzmnr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-old-k8s-version-264160             100m (5%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 34s                kube-proxy       
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node old-k8s-version-264160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node old-k8s-version-264160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x7 over 58s)  kubelet          Node old-k8s-version-264160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s                kubelet          Node old-k8s-version-264160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s                kubelet          Node old-k8s-version-264160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s                kubelet          Node old-k8s-version-264160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  48s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           37s                node-controller  Node old-k8s-version-264160 event: Registered Node old-k8s-version-264160 in Controller
	  Normal  NodeReady                22s                kubelet          Node old-k8s-version-264160 status is now: NodeReady
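	The percentages in the Allocated resources table are computed against the node's allocatable capacity: with 2 allocatable CPUs (2000 millicores), the summed requests of 850m come to 42% after integer truncation. A tiny Go check of that arithmetic, with the values copied from the tables above:

	package main

	import "fmt"

	func main() {
		// Allocatable: 2 CPUs = 2000 millicores (from the Allocatable block).
		allocatableMilli := 2000
		// CPU requests from the pod table:
		// coredns 100m + etcd 100m + kindnet 100m + apiserver 250m +
		// controller-manager 200m + scheduler 100m = 850m.
		requestsMilli := 100 + 100 + 100 + 250 + 200 + 100
		fmt.Printf("%dm of %dm = %d%%\n", requestsMilli, allocatableMilli,
			requestsMilli*100/allocatableMilli) // prints 850m of 2000m = 42%
	}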
	
	
	==> dmesg <==
	[Nov19 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.032038] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[Nov19 21:18] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034282] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.730183] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.763794] kauditd_printk_skb: 36 callbacks suppressed
	[Nov19 21:50] hrtimer: interrupt took 11278311 ns
	
	
	==> etcd [85f86fccea0828d06ebe49ecd748897b5c79764ef02605e9b0dcfe4d0da55086] <==
	{"level":"info","ts":"2025-11-19T22:35:56.498496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-19T22:35:56.498586Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-19T22:35:56.499212Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T22:35:56.499356Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:35:56.49937Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:35:56.500029Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T22:35:56.500058Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T22:35:57.378189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-19T22:35:57.378405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-19T22:35:57.378513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-19T22:35:57.378604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-19T22:35:57.378648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:35:57.378765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-19T22:35:57.378861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:35:57.380547Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:35:57.381686Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-264160 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T22:35:57.381779Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:35:57.385726Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:35:57.385955Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:35:57.386049Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:35:57.386901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T22:35:57.38702Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:35:57.387495Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T22:35:57.38756Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T22:35:57.38824Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 22:36:53 up  1:18,  0 user,  load average: 2.22, 3.50, 2.75
	Linux old-k8s-version-264160 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3dc4045566ee801891a80913f3c0d08405af235938655312d13ffdb5bece221c] <==
	I1119 22:36:20.789101       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:36:20.789364       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:36:20.789559       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:36:20.789578       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:36:20.789592       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:36:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:36:20.990706       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:36:20.990731       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:36:20.990740       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:36:20.992039       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:36:21.190870       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:36:21.190976       1 metrics.go:72] Registering metrics
	I1119 22:36:21.191093       1 controller.go:711] "Syncing nftables rules"
	I1119 22:36:30.994216       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:36:30.994256       1 main.go:301] handling current node
	I1119 22:36:40.992854       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:36:40.992893       1 main.go:301] handling current node
	I1119 22:36:50.992386       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:36:50.992422       1 main.go:301] handling current node
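	The 10-second cadence of the "Handling node" lines above is a fixed resync loop: on every tick the daemon lists the cluster's nodes and reconciles state for each. A generic sketch of that loop shape with client-go; the names and output are illustrative, not kindnet's own code:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config, as a DaemonSet pod like kindnet would use.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// Reconcile on the same fixed cadence visible in the log (every 10s).
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
			if err != nil {
				log.Println("list nodes:", err)
				continue
			}
			for _, n := range nodes.Items {
				for _, addr := range n.Status.Addresses {
					fmt.Printf("handling node %s with IP %s\n", n.Name, addr.Address)
				}
			}
		}
	}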
	
	
	==> kube-apiserver [83a25278b16a7bc6a4252ba6f8c2ce8a60621e9d435c828ededf66aecfda2443] <==
	I1119 22:36:02.053875       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1119 22:36:02.055155       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 22:36:02.055381       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 22:36:02.055593       1 aggregator.go:166] initial CRD sync complete...
	I1119 22:36:02.055613       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 22:36:02.055620       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:36:02.055627       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:36:02.066246       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 22:36:02.090717       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 22:36:02.094391       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:36:02.747612       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:36:02.754129       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:36:02.754179       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:36:03.457051       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:36:03.510012       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:36:03.578204       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:36:03.591054       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:36:03.592389       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 22:36:03.598109       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:36:03.932055       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 22:36:05.470569       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 22:36:05.488449       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:36:05.503361       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1119 22:36:17.195970       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 22:36:17.744558       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9ce9313d9aae43100c1f669a0216b1ce028ec3fd90f9042e2780602b3b9dabcf] <==
	I1119 22:36:17.007432       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-264160" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:36:17.007693       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-264160" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:36:17.202717       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1119 22:36:17.309848       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:36:17.309885       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 22:36:17.345695       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:36:17.758353       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-m9nqq"
	I1119 22:36:17.771209       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zzmnr"
	I1119 22:36:17.833691       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vz7zx"
	I1119 22:36:17.844755       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qtkkx"
	I1119 22:36:17.870437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="668.610241ms"
	I1119 22:36:17.886833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.893452ms"
	I1119 22:36:17.887202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="257.726µs"
	I1119 22:36:17.895692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.147µs"
	I1119 22:36:19.212001       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1119 22:36:19.246883       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-qtkkx"
	I1119 22:36:19.269962       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.063512ms"
	I1119 22:36:19.286597       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.59033ms"
	I1119 22:36:19.287055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.554µs"
	I1119 22:36:31.144412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.433µs"
	I1119 22:36:31.166398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.171µs"
	I1119 22:36:31.900825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="227.679µs"
	I1119 22:36:31.988585       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1119 22:36:41.472572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.52455ms"
	I1119 22:36:41.472677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.621µs"
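	The "Scaled down replica set coredns-5dd5756b68 to 1 from 2" event above is the controller reacting to minikube rescaling the coredns Deployment (the "rescaled to 1 replicas" line in the start log). A hedged client-go equivalent of that rescale via the scale subresource; the kubeconfig path is the one this run writes, everything else is a sketch:

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/21918-2347/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		ctx := context.Background()
		deployments := clientset.AppsV1().Deployments("kube-system")

		// Read the current scale, set replicas to 1, and write it back; the
		// deployment controller then emits the SuccessfulDelete event seen above.
		scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		scale.Spec.Replicas = 1
		if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
		log.Println("coredns rescaled to 1 replica")
	}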
	
	
	==> kube-proxy [e5c22c9877dd10241d18184894e9e614c72ec9cfb5a007bdae07416884620fcb] <==
	I1119 22:36:18.757438       1 server_others.go:69] "Using iptables proxy"
	I1119 22:36:18.778593       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1119 22:36:18.913376       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:36:18.915584       1 server_others.go:152] "Using iptables Proxier"
	I1119 22:36:18.915624       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 22:36:18.915633       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 22:36:18.915677       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 22:36:18.915959       1 server.go:846] "Version info" version="v1.28.0"
	I1119 22:36:18.915974       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:36:18.920244       1 config.go:188] "Starting service config controller"
	I1119 22:36:18.920284       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 22:36:18.920312       1 config.go:97] "Starting endpoint slice config controller"
	I1119 22:36:18.920331       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 22:36:18.920900       1 config.go:315] "Starting node config controller"
	I1119 22:36:18.920980       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 22:36:19.021747       1 shared_informer.go:318] Caches are synced for node config
	I1119 22:36:19.021777       1 shared_informer.go:318] Caches are synced for service config
	I1119 22:36:19.021803       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0aa1bd28b60733799ab92c2d108b32fc31d28ba32f45f38e766395ec615ed220] <==
	W1119 22:36:02.058403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1119 22:36:02.058916       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 22:36:02.058451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1119 22:36:02.058979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1119 22:36:02.058497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1119 22:36:02.059039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1119 22:36:02.058567       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1119 22:36:02.059110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 22:36:02.058600       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1119 22:36:02.059179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1119 22:36:02.058632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 22:36:02.059241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 22:36:02.878544       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1119 22:36:02.878578       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 22:36:02.913574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1119 22:36:02.913618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1119 22:36:02.963123       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1119 22:36:02.963158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1119 22:36:03.017826       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 22:36:03.018067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 22:36:03.127020       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1119 22:36:03.127294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 22:36:03.201758       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1119 22:36:03.202031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1119 22:36:05.139254       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 22:36:16 old-k8s-version-264160 kubelet[1553]: I1119 22:36:16.880132    1553 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.764724    1553 topology_manager.go:215] "Topology Admit Handler" podUID="2f9f6fbb-c725-49fd-ba3a-c84a7640aac2" podNamespace="kube-system" podName="kindnet-m9nqq"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.781987    1553 topology_manager.go:215] "Topology Admit Handler" podUID="3ee1645f-fba5-4206-bb83-70d298a4c5ac" podNamespace="kube-system" podName="kube-proxy-zzmnr"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828089    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f9f6fbb-c725-49fd-ba3a-c84a7640aac2-xtables-lock\") pod \"kindnet-m9nqq\" (UID: \"2f9f6fbb-c725-49fd-ba3a-c84a7640aac2\") " pod="kube-system/kindnet-m9nqq"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828147    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ee1645f-fba5-4206-bb83-70d298a4c5ac-kube-proxy\") pod \"kube-proxy-zzmnr\" (UID: \"3ee1645f-fba5-4206-bb83-70d298a4c5ac\") " pod="kube-system/kube-proxy-zzmnr"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828177    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvk49\" (UniqueName: \"kubernetes.io/projected/2f9f6fbb-c725-49fd-ba3a-c84a7640aac2-kube-api-access-kvk49\") pod \"kindnet-m9nqq\" (UID: \"2f9f6fbb-c725-49fd-ba3a-c84a7640aac2\") " pod="kube-system/kindnet-m9nqq"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828200    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ee1645f-fba5-4206-bb83-70d298a4c5ac-xtables-lock\") pod \"kube-proxy-zzmnr\" (UID: \"3ee1645f-fba5-4206-bb83-70d298a4c5ac\") " pod="kube-system/kube-proxy-zzmnr"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828223    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ee1645f-fba5-4206-bb83-70d298a4c5ac-lib-modules\") pod \"kube-proxy-zzmnr\" (UID: \"3ee1645f-fba5-4206-bb83-70d298a4c5ac\") " pod="kube-system/kube-proxy-zzmnr"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828251    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f9f6fbb-c725-49fd-ba3a-c84a7640aac2-lib-modules\") pod \"kindnet-m9nqq\" (UID: \"2f9f6fbb-c725-49fd-ba3a-c84a7640aac2\") " pod="kube-system/kindnet-m9nqq"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828274    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2f9f6fbb-c725-49fd-ba3a-c84a7640aac2-cni-cfg\") pod \"kindnet-m9nqq\" (UID: \"2f9f6fbb-c725-49fd-ba3a-c84a7640aac2\") " pod="kube-system/kindnet-m9nqq"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828297    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc7w4\" (UniqueName: \"kubernetes.io/projected/3ee1645f-fba5-4206-bb83-70d298a4c5ac-kube-api-access-fc7w4\") pod \"kube-proxy-zzmnr\" (UID: \"3ee1645f-fba5-4206-bb83-70d298a4c5ac\") " pod="kube-system/kube-proxy-zzmnr"
	Nov 19 22:36:20 old-k8s-version-264160 kubelet[1553]: I1119 22:36:20.875429    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-m9nqq" podStartSLOduration=1.9015295540000001 podCreationTimestamp="2025-11-19 22:36:17 +0000 UTC" firstStartedPulling="2025-11-19 22:36:18.551512561 +0000 UTC m=+13.118052946" lastFinishedPulling="2025-11-19 22:36:20.525369265 +0000 UTC m=+15.091909650" observedRunningTime="2025-11-19 22:36:20.875315381 +0000 UTC m=+15.441855783" watchObservedRunningTime="2025-11-19 22:36:20.875386258 +0000 UTC m=+15.441926643"
	Nov 19 22:36:20 old-k8s-version-264160 kubelet[1553]: I1119 22:36:20.876203    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zzmnr" podStartSLOduration=3.87615718 podCreationTimestamp="2025-11-19 22:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:36:18.872222396 +0000 UTC m=+13.438762780" watchObservedRunningTime="2025-11-19 22:36:20.87615718 +0000 UTC m=+15.442697581"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.092782    1553 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.139366    1553 topology_manager.go:215] "Topology Admit Handler" podUID="7e7645ad-49a9-4f0c-89cc-128538e4d95c" podNamespace="kube-system" podName="coredns-5dd5756b68-vz7zx"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.152446    1553 topology_manager.go:215] "Topology Admit Handler" podUID="8e2dda77-5a6d-4796-926b-5a06158f8cdf" podNamespace="kube-system" podName="storage-provisioner"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.233967    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e7645ad-49a9-4f0c-89cc-128538e4d95c-config-volume\") pod \"coredns-5dd5756b68-vz7zx\" (UID: \"7e7645ad-49a9-4f0c-89cc-128538e4d95c\") " pod="kube-system/coredns-5dd5756b68-vz7zx"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.234065    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkc9q\" (UniqueName: \"kubernetes.io/projected/7e7645ad-49a9-4f0c-89cc-128538e4d95c-kube-api-access-pkc9q\") pod \"coredns-5dd5756b68-vz7zx\" (UID: \"7e7645ad-49a9-4f0c-89cc-128538e4d95c\") " pod="kube-system/coredns-5dd5756b68-vz7zx"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.234125    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8e2dda77-5a6d-4796-926b-5a06158f8cdf-tmp\") pod \"storage-provisioner\" (UID: \"8e2dda77-5a6d-4796-926b-5a06158f8cdf\") " pod="kube-system/storage-provisioner"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.234229    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dt4z\" (UniqueName: \"kubernetes.io/projected/8e2dda77-5a6d-4796-926b-5a06158f8cdf-kube-api-access-4dt4z\") pod \"storage-provisioner\" (UID: \"8e2dda77-5a6d-4796-926b-5a06158f8cdf\") " pod="kube-system/storage-provisioner"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.928942    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vz7zx" podStartSLOduration=14.928898879 podCreationTimestamp="2025-11-19 22:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:36:31.902288078 +0000 UTC m=+26.468828471" watchObservedRunningTime="2025-11-19 22:36:31.928898879 +0000 UTC m=+26.495439272"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.929197    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.929173877 podCreationTimestamp="2025-11-19 22:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:36:31.926923668 +0000 UTC m=+26.493464217" watchObservedRunningTime="2025-11-19 22:36:31.929173877 +0000 UTC m=+26.495714286"
	Nov 19 22:36:43 old-k8s-version-264160 kubelet[1553]: I1119 22:36:43.392110    1553 topology_manager.go:215] "Topology Admit Handler" podUID="2af6deb4-937f-4b9b-9de6-995e75a080b8" podNamespace="default" podName="busybox"
	Nov 19 22:36:43 old-k8s-version-264160 kubelet[1553]: I1119 22:36:43.523830    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7ph\" (UniqueName: \"kubernetes.io/projected/2af6deb4-937f-4b9b-9de6-995e75a080b8-kube-api-access-kb7ph\") pod \"busybox\" (UID: \"2af6deb4-937f-4b9b-9de6-995e75a080b8\") " pod="default/busybox"
	Nov 19 22:36:46 old-k8s-version-264160 kubelet[1553]: I1119 22:36:46.935525    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.727293354 podCreationTimestamp="2025-11-19 22:36:43 +0000 UTC" firstStartedPulling="2025-11-19 22:36:43.851422103 +0000 UTC m=+38.417962488" lastFinishedPulling="2025-11-19 22:36:46.059604676 +0000 UTC m=+40.626145060" observedRunningTime="2025-11-19 22:36:46.934118134 +0000 UTC m=+41.500658519" watchObservedRunningTime="2025-11-19 22:36:46.935475926 +0000 UTC m=+41.502016319"
	
	
	==> storage-provisioner [f62b743b6725ec9ff1e91e664da6c9ce15d837afbab3608cc02fec3c9bd3d929] <==
	I1119 22:36:31.737660       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:36:31.757257       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:36:31.757310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 22:36:31.769006       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:36:31.771663       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-264160_88781c45-d0c6-484e-abf4-8c2df680f8d6!
	I1119 22:36:31.772385       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62b15298-f39b-43d5-9d35-ddeafad4bd4d", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-264160_88781c45-d0c6-484e-abf4-8c2df680f8d6 became leader
	I1119 22:36:31.872085       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-264160_88781c45-d0c6-484e-abf4-8c2df680f8d6!
	

                                                
                                                
-- /stdout --
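Note: the kube-scheduler "forbidden" list/watch warnings near the top of these logs are a common bootstrap pattern rather than the failure under test: the scheduler starts before its system:kube-scheduler RBAC bindings have been reconciled, and the warnings stop once the informer caches sync (the final "Caches are synced" line above). As an out-of-band sanity check (a sketch, not part of the test harness; it assumes the kubectl context name from this run), the scheduler's effective permissions can be probed with kubectl impersonation:

    # Ask the apiserver whether system:kube-scheduler may now perform the
    # list operations that were denied during bootstrap; "yes" means the
    # RBAC bindings have reconciled.
    kubectl --context old-k8s-version-264160 auth can-i list pods \
      --all-namespaces --as=system:kube-scheduler
    kubectl --context old-k8s-version-264160 auth can-i list csinodes.storage.k8s.io \
      --as=system:kube-scheduler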
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-264160 -n old-k8s-version-264160
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-264160 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-264160
helpers_test.go:243: (dbg) docker inspect old-k8s-version-264160:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a",
	        "Created": "2025-11-19T22:35:36.829393211Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 205037,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:35:36.889026709Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a/hostname",
	        "HostsPath": "/var/lib/docker/containers/49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a/hosts",
	        "LogPath": "/var/lib/docker/containers/49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a/49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a-json.log",
	        "Name": "/old-k8s-version-264160",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-264160:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-264160",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49717cdd4541256c61f8dce96738708ef0a5263ed6216dabb995ea611616d37a",
	                "LowerDir": "/var/lib/docker/overlay2/feff7a4e723e18389dcb4a6f7e089bff4aeb566c5b553ed60b078e825f1fd0a8-init/diff:/var/lib/docker/overlay2/b6ebc9601ea0ae08484f263713f3358dd93f7748ebfafbd9155229908dee9606/diff",
	                "MergedDir": "/var/lib/docker/overlay2/feff7a4e723e18389dcb4a6f7e089bff4aeb566c5b553ed60b078e825f1fd0a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/feff7a4e723e18389dcb4a6f7e089bff4aeb566c5b553ed60b078e825f1fd0a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/feff7a4e723e18389dcb4a6f7e089bff4aeb566c5b553ed60b078e825f1fd0a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-264160",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-264160/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-264160",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-264160",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-264160",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c6d7c0f5ea4187c0bdb74e6f6190f3c956a222d61984cbd94ed19e45025d4c9",
	            "SandboxKey": "/var/run/docker/netns/1c6d7c0f5ea4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-264160": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:a5:ad:7a:8b:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b720c74a0dc38658463082bcb93730b420d57f391d495ecb21d74f5ad35b4f21",
	                    "EndpointID": "4800aba7ded95ed95a56ef1ad4bf1b238d330afe47c91b66c43c80a2794b655c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-264160",
	                        "49717cdd4541"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
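Note: in the inspect output above, HostConfig.PortBindings requests ephemeral host ports (every "HostPort" is empty), while the ports Docker actually assigned are recorded under NetworkSettings.Ports — here 8443/tcp, the apiserver endpoint, landed on 127.0.0.1:33057. A Go-template query can read a mapped port back directly (a sketch assuming the container name from this run):

    # Print the host port bound to the container's 8443/tcp endpoint.
    # For this run the command would print 33057.
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' \
      old-k8s-version-264160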
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264160 -n old-k8s-version-264160
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-264160 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-264160 logs -n 25: (1.255192388s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-156590 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo docker system info                                                                                                                                                                                                            │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo containerd config dump                                                                                                                                                                                                        │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo crio config                                                                                                                                                                                                                   │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ delete  │ -p cilium-156590                                                                                                                                                                                                                                    │ cilium-156590            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-expiration-750367 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-750367   │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ ssh     │ force-systemd-env-388402 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-388402 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p force-systemd-env-388402                                                                                                                                                                                                                         │ force-systemd-env-388402 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-options-815306 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-815306      │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ cert-options-815306 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-815306      │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p cert-options-815306 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-815306      │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ delete  │ -p cert-options-815306                                                                                                                                                                                                                              │ cert-options-815306      │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ start   │ -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-264160   │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:36 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:35:30
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:35:30.257107  204649 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:35:30.257270  204649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:30.257288  204649 out.go:374] Setting ErrFile to fd 2...
	I1119 22:35:30.257293  204649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:30.257586  204649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:35:30.258032  204649 out.go:368] Setting JSON to false
	I1119 22:35:30.259057  204649 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4651,"bootTime":1763587079,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 22:35:30.259135  204649 start.go:143] virtualization:  
	I1119 22:35:30.265034  204649 out.go:179] * [old-k8s-version-264160] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:35:30.268600  204649 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:35:30.268654  204649 notify.go:221] Checking for updates...
	I1119 22:35:30.275244  204649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:35:30.278424  204649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:35:30.281805  204649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 22:35:30.285044  204649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:35:30.288125  204649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:35:30.291809  204649 config.go:182] Loaded profile config "cert-expiration-750367": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:35:30.291938  204649 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:35:30.328984  204649 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:35:30.329118  204649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:35:30.391514  204649 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:35:30.382377652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:35:30.391618  204649 docker.go:319] overlay module found
	I1119 22:35:30.394904  204649 out.go:179] * Using the docker driver based on user configuration
	I1119 22:35:30.397906  204649 start.go:309] selected driver: docker
	I1119 22:35:30.397928  204649 start.go:930] validating driver "docker" against <nil>
	I1119 22:35:30.397942  204649 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:35:30.398744  204649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:35:30.457338  204649 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:35:30.447544183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:35:30.457505  204649 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:35:30.457734  204649 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:35:30.460603  204649 out.go:179] * Using Docker driver with root privileges
	I1119 22:35:30.463555  204649 cni.go:84] Creating CNI manager for ""
	I1119 22:35:30.463623  204649 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:35:30.463636  204649 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:35:30.463716  204649 start.go:353] cluster config:
	{Name:old-k8s-version-264160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:35:30.466849  204649 out.go:179] * Starting "old-k8s-version-264160" primary control-plane node in "old-k8s-version-264160" cluster
	I1119 22:35:30.469744  204649 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:35:30.472743  204649 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:35:30.475730  204649 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 22:35:30.475797  204649 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1119 22:35:30.475812  204649 cache.go:65] Caching tarball of preloaded images
	I1119 22:35:30.475815  204649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:35:30.475897  204649 preload.go:238] Found /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1119 22:35:30.475907  204649 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1119 22:35:30.476103  204649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/config.json ...
	I1119 22:35:30.476142  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/config.json: {Name:mka3956cf816ce3f0dc4b41766ded046d7e239b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:30.495142  204649 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:35:30.495164  204649 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:35:30.495178  204649 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:35:30.495202  204649 start.go:360] acquireMachinesLock for old-k8s-version-264160: {Name:mkb1d6d80392c055072776fe42d903323b85b557 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:35:30.495313  204649 start.go:364] duration metric: took 84.916µs to acquireMachinesLock for "old-k8s-version-264160"
	I1119 22:35:30.495346  204649 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-264160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:35:30.495417  204649 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:35:30.498755  204649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:35:30.499000  204649 start.go:159] libmachine.API.Create for "old-k8s-version-264160" (driver="docker")
	I1119 22:35:30.499040  204649 client.go:173] LocalClient.Create starting
	I1119 22:35:30.499112  204649 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem
	I1119 22:35:30.499148  204649 main.go:143] libmachine: Decoding PEM data...
	I1119 22:35:30.499166  204649 main.go:143] libmachine: Parsing certificate...
	I1119 22:35:30.499221  204649 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem
	I1119 22:35:30.499243  204649 main.go:143] libmachine: Decoding PEM data...
	I1119 22:35:30.499252  204649 main.go:143] libmachine: Parsing certificate...
	I1119 22:35:30.499620  204649 cli_runner.go:164] Run: docker network inspect old-k8s-version-264160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:35:30.514882  204649 cli_runner.go:211] docker network inspect old-k8s-version-264160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:35:30.514967  204649 network_create.go:284] running [docker network inspect old-k8s-version-264160] to gather additional debugging logs...
	I1119 22:35:30.514989  204649 cli_runner.go:164] Run: docker network inspect old-k8s-version-264160
	W1119 22:35:30.529792  204649 cli_runner.go:211] docker network inspect old-k8s-version-264160 returned with exit code 1
	I1119 22:35:30.529827  204649 network_create.go:287] error running [docker network inspect old-k8s-version-264160]: docker network inspect old-k8s-version-264160: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-264160 not found
	I1119 22:35:30.529841  204649 network_create.go:289] output of [docker network inspect old-k8s-version-264160]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-264160 not found
	
	** /stderr **
	I1119 22:35:30.529955  204649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:35:30.546966  204649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b0fa93c84379 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:8f:4f:8f:5a:a3} reservation:<nil>}
	I1119 22:35:30.547286  204649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-141c656f658f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:30:08:ea:1a:b9} reservation:<nil>}
	I1119 22:35:30.547626  204649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae633a5ffae IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:73:d8:2e:30:94} reservation:<nil>}
	I1119 22:35:30.548050  204649 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f9110}
	I1119 22:35:30.548074  204649 network_create.go:124] attempt to create docker network old-k8s-version-264160 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 22:35:30.548135  204649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-264160 old-k8s-version-264160
	I1119 22:35:30.612059  204649 network_create.go:108] docker network old-k8s-version-264160 192.168.76.0/24 created
	I1119 22:35:30.612094  204649 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-264160" container
	I1119 22:35:30.612164  204649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:35:30.629392  204649 cli_runner.go:164] Run: docker volume create old-k8s-version-264160 --label name.minikube.sigs.k8s.io=old-k8s-version-264160 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:35:30.648884  204649 oci.go:103] Successfully created a docker volume old-k8s-version-264160
	I1119 22:35:30.648982  204649 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-264160-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-264160 --entrypoint /usr/bin/test -v old-k8s-version-264160:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:35:31.199519  204649 oci.go:107] Successfully prepared a docker volume old-k8s-version-264160
	I1119 22:35:31.199605  204649 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 22:35:31.199622  204649 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:35:31.199697  204649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-264160:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 22:35:36.761404  204649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-264160:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (5.561655508s)
	I1119 22:35:36.761444  204649 kic.go:203] duration metric: took 5.561818243s to extract preloaded images to volume ...
	W1119 22:35:36.761577  204649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:35:36.761693  204649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:35:36.815053  204649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-264160 --name old-k8s-version-264160 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-264160 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-264160 --network old-k8s-version-264160 --ip 192.168.76.2 --volume old-k8s-version-264160:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:35:37.145087  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Running}}
	I1119 22:35:37.171282  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:35:37.199972  204649 cli_runner.go:164] Run: docker exec old-k8s-version-264160 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:35:37.254683  204649 oci.go:144] the created container "old-k8s-version-264160" has a running status.
	I1119 22:35:37.254726  204649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa...
	I1119 22:35:38.063600  204649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:35:38.084666  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:35:38.103756  204649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:35:38.103781  204649 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-264160 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:35:38.159199  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:35:38.177494  204649 machine.go:94] provisionDockerMachine start ...
	I1119 22:35:38.177599  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:38.195122  204649 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:38.195453  204649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1119 22:35:38.195469  204649 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:35:38.196184  204649 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 22:35:41.337849  204649 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-264160
	
	I1119 22:35:41.337872  204649 ubuntu.go:182] provisioning hostname "old-k8s-version-264160"
	I1119 22:35:41.337936  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:41.356186  204649 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:41.356488  204649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1119 22:35:41.356501  204649 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-264160 && echo "old-k8s-version-264160" | sudo tee /etc/hostname
	I1119 22:35:41.512063  204649 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-264160
	
	I1119 22:35:41.512155  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:41.531307  204649 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:41.531635  204649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1119 22:35:41.531659  204649 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-264160' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-264160/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-264160' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:35:41.674522  204649 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:35:41.674549  204649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-2347/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-2347/.minikube}
	I1119 22:35:41.674570  204649 ubuntu.go:190] setting up certificates
	I1119 22:35:41.674581  204649 provision.go:84] configureAuth start
	I1119 22:35:41.674640  204649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-264160
	I1119 22:35:41.694614  204649 provision.go:143] copyHostCerts
	I1119 22:35:41.694682  204649 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem, removing ...
	I1119 22:35:41.694696  204649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem
	I1119 22:35:41.694778  204649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem (1675 bytes)
	I1119 22:35:41.694893  204649 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem, removing ...
	I1119 22:35:41.694904  204649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem
	I1119 22:35:41.694933  204649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem (1082 bytes)
	I1119 22:35:41.694994  204649 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem, removing ...
	I1119 22:35:41.695002  204649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem
	I1119 22:35:41.695027  204649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem (1123 bytes)
	I1119 22:35:41.695078  204649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-264160 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-264160]
	I1119 22:35:41.985138  204649 provision.go:177] copyRemoteCerts
	I1119 22:35:41.985210  204649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:35:41.985253  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:42.011744  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:35:42.120462  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1119 22:35:42.153941  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:35:42.177275  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:35:42.199768  204649 provision.go:87] duration metric: took 525.161639ms to configureAuth
	I1119 22:35:42.199797  204649 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:35:42.199999  204649 config.go:182] Loaded profile config "old-k8s-version-264160": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 22:35:42.200014  204649 machine.go:97] duration metric: took 4.022496163s to provisionDockerMachine
	I1119 22:35:42.200022  204649 client.go:176] duration metric: took 11.700970491s to LocalClient.Create
	I1119 22:35:42.200036  204649 start.go:167] duration metric: took 11.70103788s to libmachine.API.Create "old-k8s-version-264160"
	I1119 22:35:42.200044  204649 start.go:293] postStartSetup for "old-k8s-version-264160" (driver="docker")
	I1119 22:35:42.200053  204649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:35:42.200107  204649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:35:42.200153  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:42.221138  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:35:42.326805  204649 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:35:42.330396  204649 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:35:42.330426  204649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:35:42.330439  204649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/addons for local assets ...
	I1119 22:35:42.330497  204649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/files for local assets ...
	I1119 22:35:42.330585  204649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem -> 41442.pem in /etc/ssl/certs
	I1119 22:35:42.330694  204649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:35:42.338569  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:35:42.358341  204649 start.go:296] duration metric: took 158.281623ms for postStartSetup
	I1119 22:35:42.358732  204649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-264160
	I1119 22:35:42.376951  204649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/config.json ...
	I1119 22:35:42.377417  204649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:35:42.377467  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:42.395134  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:35:42.495341  204649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:35:42.499972  204649 start.go:128] duration metric: took 12.004539402s to createHost
	I1119 22:35:42.500036  204649 start.go:83] releasing machines lock for "old-k8s-version-264160", held for 12.004707247s
	I1119 22:35:42.500112  204649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-264160
	I1119 22:35:42.517291  204649 ssh_runner.go:195] Run: cat /version.json
	I1119 22:35:42.517425  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:42.517727  204649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:35:42.517817  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:35:42.538882  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:35:42.547918  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:35:42.646164  204649 ssh_runner.go:195] Run: systemctl --version
	I1119 22:35:42.733875  204649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:35:42.738275  204649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:35:42.738377  204649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:35:42.768357  204649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:35:42.768382  204649 start.go:496] detecting cgroup driver to use...
	I1119 22:35:42.768416  204649 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:35:42.768467  204649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:35:42.786112  204649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:35:42.799389  204649 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:35:42.799458  204649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:35:42.817550  204649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:35:42.837250  204649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:35:42.954428  204649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:35:43.089677  204649 docker.go:234] disabling docker service ...
	I1119 22:35:43.089796  204649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:35:43.119196  204649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:35:43.133883  204649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:35:43.271748  204649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:35:43.403111  204649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:35:43.416605  204649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:35:43.431762  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1119 22:35:43.441044  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:35:43.450280  204649 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1119 22:35:43.450355  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1119 22:35:43.460541  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:35:43.469380  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:35:43.478023  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:35:43.486801  204649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:35:43.495927  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:35:43.505431  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:35:43.514750  204649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
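Before containerd is restarted below, the effect of this sed sequence can be spot-checked; a minimal sketch (the grep pattern and the expected values are inferred from the commands above, not separately verified):

    docker exec old-k8s-version-264160 grep -nE \
      'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    # expected after the edits: SystemdCgroup = false, restrict_oom_score_adj = false,
    # sandbox_image = "registry.k8s.io/pause:3.9", conf_dir = "/etc/cni/net.d",
    # enable_unprivileged_ports = true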
	I1119 22:35:43.524906  204649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:35:43.533562  204649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:35:43.541294  204649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:35:43.666061  204649 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 22:35:43.801836  204649 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:35:43.801996  204649 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:35:43.807154  204649 start.go:564] Will wait 60s for crictl version
	I1119 22:35:43.807283  204649 ssh_runner.go:195] Run: which crictl
	I1119 22:35:43.810929  204649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:35:43.840804  204649 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:35:43.840924  204649 ssh_runner.go:195] Run: containerd --version
	I1119 22:35:43.863403  204649 ssh_runner.go:195] Run: containerd --version
	I1119 22:35:43.892718  204649 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1119 22:35:43.895641  204649 cli_runner.go:164] Run: docker network inspect old-k8s-version-264160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:35:43.912965  204649 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 22:35:43.916790  204649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:35:43.926772  204649 kubeadm.go:884] updating cluster {Name:old-k8s-version-264160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:35:43.926887  204649 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 22:35:43.926949  204649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:35:43.959370  204649 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:35:43.959391  204649 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:35:43.959451  204649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:35:43.989251  204649 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:35:43.989276  204649 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:35:43.989284  204649 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1119 22:35:43.989377  204649 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-264160 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:35:43.989454  204649 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:35:44.018509  204649 cni.go:84] Creating CNI manager for ""
	I1119 22:35:44.018532  204649 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:35:44.018554  204649 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:35:44.018590  204649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-264160 NodeName:old-k8s-version-264160 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:35:44.018720  204649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-264160"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:35:44.018791  204649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1119 22:35:44.027774  204649 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:35:44.027843  204649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:35:44.035977  204649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1119 22:35:44.049828  204649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:35:44.063834  204649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
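The rendered config written here can be sanity-checked offline before kubeadm consumes it; a sketch assuming the staged v1.28.0 binaries (the kubeadm config validate subcommand exists as of roughly v1.26, so this is an assumption about the staged binary rather than something this run performed):

    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new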
	I1119 22:35:44.078459  204649 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:35:44.082544  204649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:35:44.093549  204649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:35:44.218127  204649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:35:44.238847  204649 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160 for IP: 192.168.76.2
	I1119 22:35:44.238867  204649 certs.go:195] generating shared ca certs ...
	I1119 22:35:44.238885  204649 certs.go:227] acquiring lock for ca certs: {Name:mk76285c445bf14c1e73dedba3201c9181209ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:44.239062  204649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key
	I1119 22:35:44.239112  204649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key
	I1119 22:35:44.239124  204649 certs.go:257] generating profile certs ...
	I1119 22:35:44.239186  204649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.key
	I1119 22:35:44.239203  204649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt with IP's: []
	I1119 22:35:44.811737  204649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt ...
	I1119 22:35:44.811764  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: {Name:mk14e11ecda6c7214508a5ade0f9ee915e780f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:44.811951  204649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.key ...
	I1119 22:35:44.811960  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.key: {Name:mk0adfc8036cdd3c163e4cffd5e262cb5308dfe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:44.812038  204649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key.955d0b5b
	I1119 22:35:44.812063  204649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt.955d0b5b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 22:35:45.101024  204649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt.955d0b5b ...
	I1119 22:35:45.101056  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt.955d0b5b: {Name:mk5142ac1d579327ae160e83fc7f68b0f3557595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:45.101255  204649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key.955d0b5b ...
	I1119 22:35:45.101267  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key.955d0b5b: {Name:mkc12bee6747eface51cd5e77da3f942ad5e5618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:45.101361  204649 certs.go:382] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt.955d0b5b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt
	I1119 22:35:45.101462  204649 certs.go:386] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key.955d0b5b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key
	I1119 22:35:45.101522  204649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.key
	I1119 22:35:45.101539  204649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.crt with IP's: []
	I1119 22:35:45.832941  204649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.crt ...
	I1119 22:35:45.832971  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.crt: {Name:mk306cbc09a8a4cdf49bd23a7f735885d2e6d6d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:45.833166  204649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.key ...
	I1119 22:35:45.833185  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.key: {Name:mk51455941ef13941a00f8719c0c4a50b2eaa3aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
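The SAN lists requested above (san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-264160] for the server cert, IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] for the apiserver cert) can be read back from the written files with openssl, for example:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt \
      | grep -A1 'Subject Alternative Name'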
	I1119 22:35:45.833395  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem (1338 bytes)
	W1119 22:35:45.833433  204649 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144_empty.pem, impossibly tiny 0 bytes
	I1119 22:35:45.833442  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:35:45.833468  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:35:45.833497  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:35:45.833529  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem (1675 bytes)
	I1119 22:35:45.833577  204649 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:35:45.834165  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:35:45.856349  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1119 22:35:45.877913  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:35:45.896516  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:35:45.914586  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1119 22:35:45.933361  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:35:45.951038  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:35:45.973047  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:35:45.994027  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:35:46.025730  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem --> /usr/share/ca-certificates/4144.pem (1338 bytes)
	I1119 22:35:46.045750  204649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /usr/share/ca-certificates/41442.pem (1708 bytes)
	I1119 22:35:46.073629  204649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:35:46.087614  204649 ssh_runner.go:195] Run: openssl version
	I1119 22:35:46.094872  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:35:46.103931  204649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:46.108400  204649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:46.108519  204649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:46.165543  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:35:46.174470  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4144.pem && ln -fs /usr/share/ca-certificates/4144.pem /etc/ssl/certs/4144.pem"
	I1119 22:35:46.182680  204649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4144.pem
	I1119 22:35:46.186577  204649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/4144.pem
	I1119 22:35:46.186637  204649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem
	I1119 22:35:46.228043  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4144.pem /etc/ssl/certs/51391683.0"
	I1119 22:35:46.236269  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41442.pem && ln -fs /usr/share/ca-certificates/41442.pem /etc/ssl/certs/41442.pem"
	I1119 22:35:46.244687  204649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41442.pem
	I1119 22:35:46.248576  204649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/41442.pem
	I1119 22:35:46.248696  204649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem
	I1119 22:35:46.290804  204649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41442.pem /etc/ssl/certs/3ec20f2e.0"
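The .0-suffixed names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash values — the c_rehash convention OpenSSL uses to look up CAs in /etc/ssl/certs. The link name for any PEM cert can be derived the same way minikube does here:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem)
    sudo ln -fs /etc/ssl/certs/41442.pem "/etc/ssl/certs/${h}.0"   # yields 3ec20f2e.0 in this run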
	I1119 22:35:46.299091  204649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:35:46.302689  204649 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:35:46.302790  204649 kubeadm.go:401] StartCluster: {Name:old-k8s-version-264160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:35:46.302872  204649 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:35:46.302930  204649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:35:46.341874  204649 cri.go:89] found id: ""
	I1119 22:35:46.341955  204649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:35:46.349861  204649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:35:46.358624  204649 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:35:46.358700  204649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:35:46.366859  204649 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:35:46.366882  204649 kubeadm.go:158] found existing configuration files:
	
	I1119 22:35:46.366956  204649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:35:46.375053  204649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:35:46.375118  204649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:35:46.382569  204649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:35:46.390549  204649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:35:46.390660  204649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:35:46.398378  204649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:35:46.406002  204649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:35:46.406127  204649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:35:46.414558  204649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:35:46.422462  204649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:35:46.422528  204649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:35:46.430234  204649 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:35:46.480821  204649 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1119 22:35:46.480973  204649 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:35:46.518306  204649 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:35:46.518408  204649 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:35:46.518469  204649 kubeadm.go:319] OS: Linux
	I1119 22:35:46.518555  204649 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:35:46.518627  204649 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:35:46.518704  204649 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:35:46.518775  204649 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:35:46.518848  204649 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:35:46.518928  204649 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:35:46.518993  204649 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:35:46.519065  204649 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:35:46.519136  204649 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:35:46.603387  204649 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:35:46.603532  204649 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:35:46.603659  204649 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1119 22:35:46.748614  204649 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:35:46.754520  204649 out.go:252]   - Generating certificates and keys ...
	I1119 22:35:46.754636  204649 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:35:46.754728  204649 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:35:47.362621  204649 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:35:47.861152  204649 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:35:48.578567  204649 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:35:48.709308  204649 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:35:49.572586  204649 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:35:49.572742  204649 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-264160] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:35:50.286968  204649 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:35:50.287350  204649 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-264160] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:35:50.729163  204649 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:35:51.087355  204649 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:35:51.301494  204649 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:35:51.301799  204649 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:35:52.439151  204649 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:35:52.767854  204649 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:35:53.170174  204649 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:35:53.873745  204649 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:35:53.874592  204649 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:35:53.877867  204649 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:35:53.883494  204649 out.go:252]   - Booting up control plane ...
	I1119 22:35:53.883605  204649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:35:53.883687  204649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:35:53.883756  204649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:35:53.900950  204649 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:35:53.901278  204649 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:35:53.901523  204649 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:35:54.050697  204649 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1119 22:36:04.052724  204649 kubeadm.go:319] [apiclient] All control plane components are healthy after 10.003761 seconds
	I1119 22:36:04.052869  204649 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:36:04.072130  204649 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:36:04.605781  204649 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:36:04.606002  204649 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-264160 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:36:05.122165  204649 kubeadm.go:319] [bootstrap-token] Using token: t3hgjm.t27pk8uf8r4mqrko
	I1119 22:36:05.125207  204649 out.go:252]   - Configuring RBAC rules ...
	I1119 22:36:05.125347  204649 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:36:05.138372  204649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:36:05.149292  204649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:36:05.153962  204649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:36:05.159111  204649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:36:05.163924  204649 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:36:05.183969  204649 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:36:05.490668  204649 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:36:05.544743  204649 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:36:05.545712  204649 kubeadm.go:319] 
	I1119 22:36:05.545794  204649 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:36:05.545800  204649 kubeadm.go:319] 
	I1119 22:36:05.545881  204649 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:36:05.545886  204649 kubeadm.go:319] 
	I1119 22:36:05.545912  204649 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:36:05.545975  204649 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:36:05.546029  204649 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:36:05.546036  204649 kubeadm.go:319] 
	I1119 22:36:05.546092  204649 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:36:05.546097  204649 kubeadm.go:319] 
	I1119 22:36:05.546192  204649 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:36:05.546198  204649 kubeadm.go:319] 
	I1119 22:36:05.546252  204649 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:36:05.546330  204649 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:36:05.546401  204649 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:36:05.546405  204649 kubeadm.go:319] 
	I1119 22:36:05.546493  204649 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:36:05.546572  204649 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:36:05.546577  204649 kubeadm.go:319] 
	I1119 22:36:05.546665  204649 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token t3hgjm.t27pk8uf8r4mqrko \
	I1119 22:36:05.546773  204649 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 \
	I1119 22:36:05.546794  204649 kubeadm.go:319] 	--control-plane 
	I1119 22:36:05.546798  204649 kubeadm.go:319] 
	I1119 22:36:05.546886  204649 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:36:05.546890  204649 kubeadm.go:319] 
	I1119 22:36:05.546975  204649 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token t3hgjm.t27pk8uf8r4mqrko \
	I1119 22:36:05.547080  204649 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 
	I1119 22:36:05.551148  204649 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:36:05.551265  204649 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:36:05.551281  204649 cni.go:84] Creating CNI manager for ""
	I1119 22:36:05.551288  204649 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:36:05.554507  204649 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:36:05.557507  204649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:36:05.576310  204649 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1119 22:36:05.576331  204649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:36:05.593718  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:36:06.658889  204649 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.065138821s)
	I1119 22:36:06.658975  204649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:36:06.659094  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:06.659175  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-264160 minikube.k8s.io/updated_at=2025_11_19T22_36_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=old-k8s-version-264160 minikube.k8s.io/primary=true
	I1119 22:36:06.818009  204649 ops.go:34] apiserver oom_adj: -16
	I1119 22:36:06.818101  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:07.318669  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:07.818290  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:08.318653  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:08.818829  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:09.318705  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:09.818670  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:10.318656  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:10.818343  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:11.318742  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:11.818660  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:12.318643  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:12.818204  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:13.318233  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:13.818478  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:14.318102  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:14.818178  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:15.318224  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:15.818601  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:16.319007  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:16.818836  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:17.318883  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:17.818083  204649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:18.005461  204649 kubeadm.go:1114] duration metric: took 11.346407343s to wait for elevateKubeSystemPrivileges
	I1119 22:36:18.005498  204649 kubeadm.go:403] duration metric: took 31.702712181s to StartCluster
	I1119 22:36:18.005516  204649 settings.go:142] acquiring lock: {Name:mk5c8f7d46662d574c7e53cf7b09709855a1e14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:36:18.005603  204649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:36:18.006647  204649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:36:18.006944  204649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:36:18.006951  204649 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:36:18.007274  204649 config.go:182] Loaded profile config "old-k8s-version-264160": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 22:36:18.007313  204649 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:36:18.007401  204649 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-264160"
	I1119 22:36:18.007419  204649 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-264160"
	I1119 22:36:18.007444  204649 host.go:66] Checking if "old-k8s-version-264160" exists ...
	I1119 22:36:18.007919  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:36:18.008446  204649 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-264160"
	I1119 22:36:18.008469  204649 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-264160"
	I1119 22:36:18.008780  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:36:18.011866  204649 out.go:179] * Verifying Kubernetes components...
	I1119 22:36:18.014838  204649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:36:18.055880  204649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:36:18.056763  204649 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-264160"
	I1119 22:36:18.056800  204649 host.go:66] Checking if "old-k8s-version-264160" exists ...
	I1119 22:36:18.057242  204649 cli_runner.go:164] Run: docker container inspect old-k8s-version-264160 --format={{.State.Status}}
	I1119 22:36:18.059443  204649 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:36:18.059467  204649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:36:18.059527  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:36:18.093613  204649 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:36:18.093726  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:36:18.095300  204649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:36:18.095428  204649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264160
	I1119 22:36:18.135800  204649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/old-k8s-version-264160/id_rsa Username:docker}
	I1119 22:36:18.357324  204649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:36:18.357451  204649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:36:18.439741  204649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:36:18.443940  204649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:36:19.165631  204649 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-264160" to be "Ready" ...
	I1119 22:36:19.165952  204649 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 22:36:19.668262  204649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.228448448s)
	I1119 22:36:19.668305  204649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.224346607s)
	I1119 22:36:19.682930  204649 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-264160" context rescaled to 1 replicas
	I1119 22:36:19.691208  204649 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:36:19.694506  204649 addons.go:515] duration metric: took 1.687167131s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1119 22:36:21.170389  204649 node_ready.go:57] node "old-k8s-version-264160" has "Ready":"False" status (will retry)
	W1119 22:36:23.669181  204649 node_ready.go:57] node "old-k8s-version-264160" has "Ready":"False" status (will retry)
	W1119 22:36:26.169468  204649 node_ready.go:57] node "old-k8s-version-264160" has "Ready":"False" status (will retry)
	W1119 22:36:28.668771  204649 node_ready.go:57] node "old-k8s-version-264160" has "Ready":"False" status (will retry)
	W1119 22:36:30.669387  204649 node_ready.go:57] node "old-k8s-version-264160" has "Ready":"False" status (will retry)
	I1119 22:36:31.179436  204649 node_ready.go:49] node "old-k8s-version-264160" is "Ready"
	I1119 22:36:31.179462  204649 node_ready.go:38] duration metric: took 12.013798629s for node "old-k8s-version-264160" to be "Ready" ...
	I1119 22:36:31.179475  204649 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:36:31.179538  204649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:36:31.199071  204649 api_server.go:72] duration metric: took 13.192088991s to wait for apiserver process to appear ...
	I1119 22:36:31.199094  204649 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:36:31.199116  204649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:36:31.209770  204649 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 22:36:31.211739  204649 api_server.go:141] control plane version: v1.28.0
	I1119 22:36:31.211767  204649 api_server.go:131] duration metric: took 12.666386ms to wait for apiserver health ...
	I1119 22:36:31.211777  204649 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:36:31.216012  204649 system_pods.go:59] 8 kube-system pods found
	I1119 22:36:31.216054  204649 system_pods.go:61] "coredns-5dd5756b68-vz7zx" [7e7645ad-49a9-4f0c-89cc-128538e4d95c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:36:31.216062  204649 system_pods.go:61] "etcd-old-k8s-version-264160" [1bd42d38-2921-483d-b656-d1f12178141b] Running
	I1119 22:36:31.216068  204649 system_pods.go:61] "kindnet-m9nqq" [2f9f6fbb-c725-49fd-ba3a-c84a7640aac2] Running
	I1119 22:36:31.216073  204649 system_pods.go:61] "kube-apiserver-old-k8s-version-264160" [454724a2-4fd6-4dc1-9cc1-a4b60944a9df] Running
	I1119 22:36:31.216084  204649 system_pods.go:61] "kube-controller-manager-old-k8s-version-264160" [a5ad5849-09a1-43bd-861a-8c92712b0a14] Running
	I1119 22:36:31.216088  204649 system_pods.go:61] "kube-proxy-zzmnr" [3ee1645f-fba5-4206-bb83-70d298a4c5ac] Running
	I1119 22:36:31.216100  204649 system_pods.go:61] "kube-scheduler-old-k8s-version-264160" [fbad20e1-7729-4503-b929-bc32986a00e8] Running
	I1119 22:36:31.216106  204649 system_pods.go:61] "storage-provisioner" [8e2dda77-5a6d-4796-926b-5a06158f8cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:36:31.216112  204649 system_pods.go:74] duration metric: took 4.329001ms to wait for pod list to return data ...
	I1119 22:36:31.216127  204649 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:36:31.219246  204649 default_sa.go:45] found service account: "default"
	I1119 22:36:31.219283  204649 default_sa.go:55] duration metric: took 3.150461ms for default service account to be created ...
	I1119 22:36:31.219293  204649 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:36:31.226730  204649 system_pods.go:86] 8 kube-system pods found
	I1119 22:36:31.226780  204649 system_pods.go:89] "coredns-5dd5756b68-vz7zx" [7e7645ad-49a9-4f0c-89cc-128538e4d95c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:36:31.226788  204649 system_pods.go:89] "etcd-old-k8s-version-264160" [1bd42d38-2921-483d-b656-d1f12178141b] Running
	I1119 22:36:31.226795  204649 system_pods.go:89] "kindnet-m9nqq" [2f9f6fbb-c725-49fd-ba3a-c84a7640aac2] Running
	I1119 22:36:31.226801  204649 system_pods.go:89] "kube-apiserver-old-k8s-version-264160" [454724a2-4fd6-4dc1-9cc1-a4b60944a9df] Running
	I1119 22:36:31.226820  204649 system_pods.go:89] "kube-controller-manager-old-k8s-version-264160" [a5ad5849-09a1-43bd-861a-8c92712b0a14] Running
	I1119 22:36:31.226840  204649 system_pods.go:89] "kube-proxy-zzmnr" [3ee1645f-fba5-4206-bb83-70d298a4c5ac] Running
	I1119 22:36:31.226854  204649 system_pods.go:89] "kube-scheduler-old-k8s-version-264160" [fbad20e1-7729-4503-b929-bc32986a00e8] Running
	I1119 22:36:31.226880  204649 system_pods.go:89] "storage-provisioner" [8e2dda77-5a6d-4796-926b-5a06158f8cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:36:31.226914  204649 retry.go:31] will retry after 302.789316ms: missing components: kube-dns
	I1119 22:36:31.534752  204649 system_pods.go:86] 8 kube-system pods found
	I1119 22:36:31.534798  204649 system_pods.go:89] "coredns-5dd5756b68-vz7zx" [7e7645ad-49a9-4f0c-89cc-128538e4d95c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:36:31.534805  204649 system_pods.go:89] "etcd-old-k8s-version-264160" [1bd42d38-2921-483d-b656-d1f12178141b] Running
	I1119 22:36:31.534811  204649 system_pods.go:89] "kindnet-m9nqq" [2f9f6fbb-c725-49fd-ba3a-c84a7640aac2] Running
	I1119 22:36:31.534815  204649 system_pods.go:89] "kube-apiserver-old-k8s-version-264160" [454724a2-4fd6-4dc1-9cc1-a4b60944a9df] Running
	I1119 22:36:31.534821  204649 system_pods.go:89] "kube-controller-manager-old-k8s-version-264160" [a5ad5849-09a1-43bd-861a-8c92712b0a14] Running
	I1119 22:36:31.534825  204649 system_pods.go:89] "kube-proxy-zzmnr" [3ee1645f-fba5-4206-bb83-70d298a4c5ac] Running
	I1119 22:36:31.534829  204649 system_pods.go:89] "kube-scheduler-old-k8s-version-264160" [fbad20e1-7729-4503-b929-bc32986a00e8] Running
	I1119 22:36:31.534838  204649 system_pods.go:89] "storage-provisioner" [8e2dda77-5a6d-4796-926b-5a06158f8cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:36:31.534852  204649 retry.go:31] will retry after 260.752212ms: missing components: kube-dns
	I1119 22:36:31.802433  204649 system_pods.go:86] 8 kube-system pods found
	I1119 22:36:31.802477  204649 system_pods.go:89] "coredns-5dd5756b68-vz7zx" [7e7645ad-49a9-4f0c-89cc-128538e4d95c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:36:31.802484  204649 system_pods.go:89] "etcd-old-k8s-version-264160" [1bd42d38-2921-483d-b656-d1f12178141b] Running
	I1119 22:36:31.802492  204649 system_pods.go:89] "kindnet-m9nqq" [2f9f6fbb-c725-49fd-ba3a-c84a7640aac2] Running
	I1119 22:36:31.802496  204649 system_pods.go:89] "kube-apiserver-old-k8s-version-264160" [454724a2-4fd6-4dc1-9cc1-a4b60944a9df] Running
	I1119 22:36:31.802502  204649 system_pods.go:89] "kube-controller-manager-old-k8s-version-264160" [a5ad5849-09a1-43bd-861a-8c92712b0a14] Running
	I1119 22:36:31.802506  204649 system_pods.go:89] "kube-proxy-zzmnr" [3ee1645f-fba5-4206-bb83-70d298a4c5ac] Running
	I1119 22:36:31.802510  204649 system_pods.go:89] "kube-scheduler-old-k8s-version-264160" [fbad20e1-7729-4503-b929-bc32986a00e8] Running
	I1119 22:36:31.802517  204649 system_pods.go:89] "storage-provisioner" [8e2dda77-5a6d-4796-926b-5a06158f8cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:36:31.802540  204649 retry.go:31] will retry after 341.00697ms: missing components: kube-dns
	I1119 22:36:32.148247  204649 system_pods.go:86] 8 kube-system pods found
	I1119 22:36:32.148281  204649 system_pods.go:89] "coredns-5dd5756b68-vz7zx" [7e7645ad-49a9-4f0c-89cc-128538e4d95c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:36:32.148298  204649 system_pods.go:89] "etcd-old-k8s-version-264160" [1bd42d38-2921-483d-b656-d1f12178141b] Running
	I1119 22:36:32.148304  204649 system_pods.go:89] "kindnet-m9nqq" [2f9f6fbb-c725-49fd-ba3a-c84a7640aac2] Running
	I1119 22:36:32.148309  204649 system_pods.go:89] "kube-apiserver-old-k8s-version-264160" [454724a2-4fd6-4dc1-9cc1-a4b60944a9df] Running
	I1119 22:36:32.148314  204649 system_pods.go:89] "kube-controller-manager-old-k8s-version-264160" [a5ad5849-09a1-43bd-861a-8c92712b0a14] Running
	I1119 22:36:32.148320  204649 system_pods.go:89] "kube-proxy-zzmnr" [3ee1645f-fba5-4206-bb83-70d298a4c5ac] Running
	I1119 22:36:32.148329  204649 system_pods.go:89] "kube-scheduler-old-k8s-version-264160" [fbad20e1-7729-4503-b929-bc32986a00e8] Running
	I1119 22:36:32.148333  204649 system_pods.go:89] "storage-provisioner" [8e2dda77-5a6d-4796-926b-5a06158f8cdf] Running
	I1119 22:36:32.148348  204649 system_pods.go:126] duration metric: took 929.047421ms to wait for k8s-apps to be running ...
	I1119 22:36:32.148356  204649 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:36:32.148423  204649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:36:32.175720  204649 system_svc.go:56] duration metric: took 27.353086ms WaitForService to wait for kubelet
	I1119 22:36:32.175754  204649 kubeadm.go:587] duration metric: took 14.168776732s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:36:32.175782  204649 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:36:32.178856  204649 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:36:32.178889  204649 node_conditions.go:123] node cpu capacity is 2
	I1119 22:36:32.178903  204649 node_conditions.go:105] duration metric: took 3.115367ms to run NodePressure ...
	I1119 22:36:32.178915  204649 start.go:242] waiting for startup goroutines ...
	I1119 22:36:32.178933  204649 start.go:247] waiting for cluster config update ...
	I1119 22:36:32.178949  204649 start.go:256] writing updated cluster config ...
	I1119 22:36:32.179275  204649 ssh_runner.go:195] Run: rm -f paused
	I1119 22:36:32.186678  204649 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:36:32.192039  204649 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vz7zx" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 22:36:34.198510  204649 pod_ready.go:104] pod "coredns-5dd5756b68-vz7zx" is not "Ready", error: <nil>
	W1119 22:36:36.198937  204649 pod_ready.go:104] pod "coredns-5dd5756b68-vz7zx" is not "Ready", error: <nil>
	W1119 22:36:38.698791  204649 pod_ready.go:104] pod "coredns-5dd5756b68-vz7zx" is not "Ready", error: <nil>
	W1119 22:36:41.198015  204649 pod_ready.go:104] pod "coredns-5dd5756b68-vz7zx" is not "Ready", error: <nil>
	I1119 22:36:41.698204  204649 pod_ready.go:94] pod "coredns-5dd5756b68-vz7zx" is "Ready"
	I1119 22:36:41.698233  204649 pod_ready.go:86] duration metric: took 9.50616482s for pod "coredns-5dd5756b68-vz7zx" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.701276  204649 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.706418  204649 pod_ready.go:94] pod "etcd-old-k8s-version-264160" is "Ready"
	I1119 22:36:41.706451  204649 pod_ready.go:86] duration metric: took 5.148925ms for pod "etcd-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.709706  204649 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.715470  204649 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-264160" is "Ready"
	I1119 22:36:41.715499  204649 pod_ready.go:86] duration metric: took 5.766499ms for pod "kube-apiserver-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.718802  204649 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:41.896506  204649 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-264160" is "Ready"
	I1119 22:36:41.896538  204649 pod_ready.go:86] duration metric: took 177.710699ms for pod "kube-controller-manager-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:42.096924  204649 pod_ready.go:83] waiting for pod "kube-proxy-zzmnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:42.496606  204649 pod_ready.go:94] pod "kube-proxy-zzmnr" is "Ready"
	I1119 22:36:42.496635  204649 pod_ready.go:86] duration metric: took 399.679699ms for pod "kube-proxy-zzmnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:42.696640  204649 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:43.096504  204649 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-264160" is "Ready"
	I1119 22:36:43.096533  204649 pod_ready.go:86] duration metric: took 399.863388ms for pod "kube-scheduler-old-k8s-version-264160" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:36:43.096547  204649 pod_ready.go:40] duration metric: took 10.90982149s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:36:43.158402  204649 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1119 22:36:43.161490  204649 out.go:203] 
	W1119 22:36:43.164427  204649 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 22:36:43.167321  204649 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 22:36:43.171088  204649 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-264160" cluster and "default" namespace by default
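Note on the CoreDNS rewrite at 22:36:18 above: the sed pipeline piped into "kubectl replace -f -" edits the Corefile in the coredns ConfigMap in two ways, inserting a hosts block ahead of the forward directive (so host.minikube.internal resolves to the host gateway) and inserting "log" before "errors" to enable query logging. Reconstructed from the sed expressions in the log (a sketch of the inserted block, not the verbatim resulting Corefile):

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}

The "host record injected into CoreDNS's ConfigMap" line at 22:36:19 confirms the replacement succeeded.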
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	85ec8d942d110       1611cd07b61d5       9 seconds ago       Running             busybox                   0                   1b11528fdca0b       busybox                                          default
	0b5a95c859ac3       97e04611ad434       24 seconds ago      Running             coredns                   0                   aa34d2193fc4c       coredns-5dd5756b68-vz7zx                         kube-system
	f62b743b6725e       ba04bb24b9575       24 seconds ago      Running             storage-provisioner       0                   2bededbe57122       storage-provisioner                              kube-system
	3dc4045566ee8       b1a8c6f707935       35 seconds ago      Running             kindnet-cni               0                   232dd2b4b80b5       kindnet-m9nqq                                    kube-system
	e5c22c9877dd1       940f54a5bcae9       37 seconds ago      Running             kube-proxy                0                   f45778acb4883       kube-proxy-zzmnr                                 kube-system
	0aa1bd28b6073       762dce4090c5f       59 seconds ago      Running             kube-scheduler            0                   4b7124d3d4b79       kube-scheduler-old-k8s-version-264160            kube-system
	83a25278b16a7       00543d2fe5d71       59 seconds ago      Running             kube-apiserver            0                   67f5df81322ce       kube-apiserver-old-k8s-version-264160            kube-system
	9ce9313d9aae4       46cc66ccc7c19       59 seconds ago      Running             kube-controller-manager   0                   0783ca7945d35       kube-controller-manager-old-k8s-version-264160   kube-system
	85f86fccea082       9cdd6470f48c8       59 seconds ago      Running             etcd                      0                   4969a45c845f9       etcd-old-k8s-version-264160                      kube-system
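The table above is CRI-level state from the node rather than kubectl output; a roughly equivalent listing can be reproduced by hand (a sketch, assuming crictl is available inside the minikube node, as it normally is for the containerd runtime):

	minikube -p old-k8s-version-264160 ssh -- sudo crictl ps -a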
	
	
	==> containerd <==
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.649388743Z" level=info msg="CreateContainer within sandbox \"aa34d2193fc4cf037239bc48a6fac96674b060cb63b8de7320bb53007ec52479\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.651192399Z" level=info msg="connecting to shim f62b743b6725ec9ff1e91e664da6c9ce15d837afbab3608cc02fec3c9bd3d929" address="unix:///run/containerd/s/1693fe8eea8ad33a7610805dc3ed40de55c61613614162362386d2386e86ea05" protocol=ttrpc version=3
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.675391127Z" level=info msg="Container 0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.690874899Z" level=info msg="CreateContainer within sandbox \"aa34d2193fc4cf037239bc48a6fac96674b060cb63b8de7320bb53007ec52479\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f\""
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.691891617Z" level=info msg="StartContainer for \"0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f\""
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.692987039Z" level=info msg="connecting to shim 0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f" address="unix:///run/containerd/s/ba4b3d499342aaf3ebd6be16fa5ad2a140167ea49a534a0a812a3977c5dcf983" protocol=ttrpc version=3
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.744679204Z" level=info msg="StartContainer for \"f62b743b6725ec9ff1e91e664da6c9ce15d837afbab3608cc02fec3c9bd3d929\" returns successfully"
	Nov 19 22:36:31 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:31.792512880Z" level=info msg="StartContainer for \"0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f\" returns successfully"
	Nov 19 22:36:43 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:43.710209778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2af6deb4-937f-4b9b-9de6-995e75a080b8,Namespace:default,Attempt:0,}"
	Nov 19 22:36:43 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:43.790459559Z" level=info msg="connecting to shim 1b11528fdca0ba74e5c7786578d6850eb5b37f9540b5e04e610639ce7fbd811f" address="unix:///run/containerd/s/b036a265eb01a921a8d2ed1a42211f4774df4a741b42e6007a96fa06394b6381" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:36:43 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:43.849747951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2af6deb4-937f-4b9b-9de6-995e75a080b8,Namespace:default,Attempt:0,} returns sandbox id \"1b11528fdca0ba74e5c7786578d6850eb5b37f9540b5e04e610639ce7fbd811f\""
	Nov 19 22:36:43 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:43.854324085Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.047353549Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.049424358Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.052705979Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.058110078Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.059158943Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.204520106s"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.059209750Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.063270047Z" level=info msg="CreateContainer within sandbox \"1b11528fdca0ba74e5c7786578d6850eb5b37f9540b5e04e610639ce7fbd811f\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.078717460Z" level=info msg="Container 85ec8d942d1102ad7f23f0923c0afa921c51c4b09ac0f93dc33203a257d7ca57: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.090943881Z" level=info msg="CreateContainer within sandbox \"1b11528fdca0ba74e5c7786578d6850eb5b37f9540b5e04e610639ce7fbd811f\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"85ec8d942d1102ad7f23f0923c0afa921c51c4b09ac0f93dc33203a257d7ca57\""
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.091676862Z" level=info msg="StartContainer for \"85ec8d942d1102ad7f23f0923c0afa921c51c4b09ac0f93dc33203a257d7ca57\""
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.092550045Z" level=info msg="connecting to shim 85ec8d942d1102ad7f23f0923c0afa921c51c4b09ac0f93dc33203a257d7ca57" address="unix:///run/containerd/s/b036a265eb01a921a8d2ed1a42211f4774df4a741b42e6007a96fa06394b6381" protocol=ttrpc version=3
	Nov 19 22:36:46 old-k8s-version-264160 containerd[760]: time="2025-11-19T22:36:46.167983720Z" level=info msg="StartContainer for \"85ec8d942d1102ad7f23f0923c0afa921c51c4b09ac0f93dc33203a257d7ca57\" returns successfully"
	Nov 19 22:36:52 old-k8s-version-264160 containerd[760]: E1119 22:36:52.581929     760 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [0b5a95c859ac383d11c4aa9fb013d9cb4c21b0ac201d6a26cc3ec130b9027e9f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51463 - 23570 "HINFO IN 6404155507127924057.1273287447177964912. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026393207s
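The NXDOMAIN lookup for the long random name is CoreDNS's loop-detection probe: the loop plugin sends an HINFO query for a random name at startup, and an NXDOMAIN answer indicates no forwarding loop through /etc/resolv.conf. The same log can be pulled with kubectl via the pod label seen earlier in the start log:

	kubectl --context old-k8s-version-264160 -n kube-system logs -l k8s-app=kube-dns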
	
	
	==> describe nodes <==
	Name:               old-k8s-version-264160
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-264160
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=old-k8s-version-264160
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_36_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:36:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-264160
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:36:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:36:36 +0000   Wed, 19 Nov 2025 22:35:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:36:36 +0000   Wed, 19 Nov 2025 22:35:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:36:36 +0000   Wed, 19 Nov 2025 22:35:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:36:36 +0000   Wed, 19 Nov 2025 22:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-264160
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                b680c3d2-ce1c-409c-bfdc-4a24b39315bd
	  Boot ID:                    b3875353-65b3-44b7-ad72-afadd7e2486a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-vz7zx                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     39s
	  kube-system                 etcd-old-k8s-version-264160                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         51s
	  kube-system                 kindnet-m9nqq                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      39s
	  kube-system                 kube-apiserver-old-k8s-version-264160             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-old-k8s-version-264160    200m (10%)    0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-proxy-zzmnr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-old-k8s-version-264160             100m (5%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 37s                kube-proxy       
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node old-k8s-version-264160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node old-k8s-version-264160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x7 over 61s)  kubelet          Node old-k8s-version-264160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 51s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s                kubelet          Node old-k8s-version-264160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s                kubelet          Node old-k8s-version-264160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s                kubelet          Node old-k8s-version-264160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  51s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           40s                node-controller  Node old-k8s-version-264160 event: Registered Node old-k8s-version-264160 in Controller
	  Normal  NodeReady                25s                kubelet          Node old-k8s-version-264160 status is now: NodeReady
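The allocated-resources percentages above are computed against the node's allocatable capacity: 850m CPU requested out of 2 CPUs (2000m) is 42.5%, shown truncated as 42%, and 220Mi of memory requested out of 8022296Ki (about 7834Mi) is roughly 2.8%, shown as 2%. This section corresponds to:

	kubectl --context old-k8s-version-264160 describe node old-k8s-version-264160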
	
	
	==> dmesg <==
	[Nov19 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.032038] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[Nov19 21:18] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034282] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.730183] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.763794] kauditd_printk_skb: 36 callbacks suppressed
	[Nov19 21:50] hrtimer: interrupt took 11278311 ns
	
	
	==> etcd [85f86fccea0828d06ebe49ecd748897b5c79764ef02605e9b0dcfe4d0da55086] <==
	{"level":"info","ts":"2025-11-19T22:35:56.498496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-19T22:35:56.498586Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-19T22:35:56.499212Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T22:35:56.499356Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:35:56.49937Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:35:56.500029Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T22:35:56.500058Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T22:35:57.378189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-19T22:35:57.378405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-19T22:35:57.378513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-19T22:35:57.378604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-19T22:35:57.378648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:35:57.378765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-19T22:35:57.378861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:35:57.380547Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:35:57.381686Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-264160 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T22:35:57.381779Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:35:57.385726Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:35:57.385955Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:35:57.386049Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:35:57.386901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T22:35:57.38702Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:35:57.387495Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T22:35:57.38756Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T22:35:57.38824Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 22:36:56 up  1:18,  0 user,  load average: 2.22, 3.50, 2.75
	Linux old-k8s-version-264160 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3dc4045566ee801891a80913f3c0d08405af235938655312d13ffdb5bece221c] <==
	I1119 22:36:20.789101       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:36:20.789364       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:36:20.789559       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:36:20.789578       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:36:20.789592       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:36:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:36:20.990706       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:36:20.990731       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:36:20.990740       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:36:20.992039       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:36:21.190870       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:36:21.190976       1 metrics.go:72] Registering metrics
	I1119 22:36:21.191093       1 controller.go:711] "Syncing nftables rules"
	I1119 22:36:30.994216       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:36:30.994256       1 main.go:301] handling current node
	I1119 22:36:40.992854       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:36:40.992893       1 main.go:301] handling current node
	I1119 22:36:50.992386       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:36:50.992422       1 main.go:301] handling current node
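kindnet is handling only the current node (192.168.76.2) and syncing nftables rules for the 10.244.0.0/16 pod range; the "nri plugin exited" line is non-fatal, since the NRI socket is simply not enabled on this node. Its logs can also be fetched through the daemonset's pod label (assumed here to be app=kindnet, as in minikube's kindnet manifest):

	kubectl --context old-k8s-version-264160 -n kube-system logs -l app=kindnet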
	
	
	==> kube-apiserver [83a25278b16a7bc6a4252ba6f8c2ce8a60621e9d435c828ededf66aecfda2443] <==
	I1119 22:36:02.053875       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1119 22:36:02.055155       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 22:36:02.055381       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 22:36:02.055593       1 aggregator.go:166] initial CRD sync complete...
	I1119 22:36:02.055613       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 22:36:02.055620       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:36:02.055627       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:36:02.066246       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 22:36:02.090717       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 22:36:02.094391       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:36:02.747612       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:36:02.754129       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:36:02.754179       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:36:03.457051       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:36:03.510012       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:36:03.578204       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:36:03.591054       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:36:03.592389       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 22:36:03.598109       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:36:03.932055       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 22:36:05.470569       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 22:36:05.488449       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:36:05.503361       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1119 22:36:17.195970       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 22:36:17.744558       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9ce9313d9aae43100c1f669a0216b1ce028ec3fd90f9042e2780602b3b9dabcf] <==
	I1119 22:36:17.007432       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-264160" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:36:17.007693       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-264160" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:36:17.202717       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1119 22:36:17.309848       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:36:17.309885       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 22:36:17.345695       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:36:17.758353       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-m9nqq"
	I1119 22:36:17.771209       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zzmnr"
	I1119 22:36:17.833691       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vz7zx"
	I1119 22:36:17.844755       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qtkkx"
	I1119 22:36:17.870437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="668.610241ms"
	I1119 22:36:17.886833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.893452ms"
	I1119 22:36:17.887202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="257.726µs"
	I1119 22:36:17.895692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.147µs"
	I1119 22:36:19.212001       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1119 22:36:19.246883       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-qtkkx"
	I1119 22:36:19.269962       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.063512ms"
	I1119 22:36:19.286597       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.59033ms"
	I1119 22:36:19.287055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.554µs"
	I1119 22:36:31.144412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.433µs"
	I1119 22:36:31.166398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.171µs"
	I1119 22:36:31.900825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="227.679µs"
	I1119 22:36:31.988585       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1119 22:36:41.472572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.52455ms"
	I1119 22:36:41.472677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.621µs"
	
	
	==> kube-proxy [e5c22c9877dd10241d18184894e9e614c72ec9cfb5a007bdae07416884620fcb] <==
	I1119 22:36:18.757438       1 server_others.go:69] "Using iptables proxy"
	I1119 22:36:18.778593       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1119 22:36:18.913376       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:36:18.915584       1 server_others.go:152] "Using iptables Proxier"
	I1119 22:36:18.915624       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 22:36:18.915633       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 22:36:18.915677       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 22:36:18.915959       1 server.go:846] "Version info" version="v1.28.0"
	I1119 22:36:18.915974       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:36:18.920244       1 config.go:188] "Starting service config controller"
	I1119 22:36:18.920284       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 22:36:18.920312       1 config.go:97] "Starting endpoint slice config controller"
	I1119 22:36:18.920331       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 22:36:18.920900       1 config.go:315] "Starting node config controller"
	I1119 22:36:18.920980       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 22:36:19.021747       1 shared_informer.go:318] Caches are synced for node config
	I1119 22:36:19.021777       1 shared_informer.go:318] Caches are synced for service config
	I1119 22:36:19.021803       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0aa1bd28b60733799ab92c2d108b32fc31d28ba32f45f38e766395ec615ed220] <==
	W1119 22:36:02.058403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1119 22:36:02.058916       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 22:36:02.058451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1119 22:36:02.058979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1119 22:36:02.058497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1119 22:36:02.059039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1119 22:36:02.058567       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1119 22:36:02.059110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 22:36:02.058600       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1119 22:36:02.059179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1119 22:36:02.058632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 22:36:02.059241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 22:36:02.878544       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1119 22:36:02.878578       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 22:36:02.913574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1119 22:36:02.913618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1119 22:36:02.963123       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1119 22:36:02.963158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1119 22:36:03.017826       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 22:36:03.018067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 22:36:03.127020       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1119 22:36:03.127294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 22:36:03.201758       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1119 22:36:03.202031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1119 22:36:05.139254       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 22:36:16 old-k8s-version-264160 kubelet[1553]: I1119 22:36:16.880132    1553 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.764724    1553 topology_manager.go:215] "Topology Admit Handler" podUID="2f9f6fbb-c725-49fd-ba3a-c84a7640aac2" podNamespace="kube-system" podName="kindnet-m9nqq"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.781987    1553 topology_manager.go:215] "Topology Admit Handler" podUID="3ee1645f-fba5-4206-bb83-70d298a4c5ac" podNamespace="kube-system" podName="kube-proxy-zzmnr"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828089    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f9f6fbb-c725-49fd-ba3a-c84a7640aac2-xtables-lock\") pod \"kindnet-m9nqq\" (UID: \"2f9f6fbb-c725-49fd-ba3a-c84a7640aac2\") " pod="kube-system/kindnet-m9nqq"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828147    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ee1645f-fba5-4206-bb83-70d298a4c5ac-kube-proxy\") pod \"kube-proxy-zzmnr\" (UID: \"3ee1645f-fba5-4206-bb83-70d298a4c5ac\") " pod="kube-system/kube-proxy-zzmnr"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828177    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvk49\" (UniqueName: \"kubernetes.io/projected/2f9f6fbb-c725-49fd-ba3a-c84a7640aac2-kube-api-access-kvk49\") pod \"kindnet-m9nqq\" (UID: \"2f9f6fbb-c725-49fd-ba3a-c84a7640aac2\") " pod="kube-system/kindnet-m9nqq"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828200    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ee1645f-fba5-4206-bb83-70d298a4c5ac-xtables-lock\") pod \"kube-proxy-zzmnr\" (UID: \"3ee1645f-fba5-4206-bb83-70d298a4c5ac\") " pod="kube-system/kube-proxy-zzmnr"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828223    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ee1645f-fba5-4206-bb83-70d298a4c5ac-lib-modules\") pod \"kube-proxy-zzmnr\" (UID: \"3ee1645f-fba5-4206-bb83-70d298a4c5ac\") " pod="kube-system/kube-proxy-zzmnr"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828251    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f9f6fbb-c725-49fd-ba3a-c84a7640aac2-lib-modules\") pod \"kindnet-m9nqq\" (UID: \"2f9f6fbb-c725-49fd-ba3a-c84a7640aac2\") " pod="kube-system/kindnet-m9nqq"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828274    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2f9f6fbb-c725-49fd-ba3a-c84a7640aac2-cni-cfg\") pod \"kindnet-m9nqq\" (UID: \"2f9f6fbb-c725-49fd-ba3a-c84a7640aac2\") " pod="kube-system/kindnet-m9nqq"
	Nov 19 22:36:17 old-k8s-version-264160 kubelet[1553]: I1119 22:36:17.828297    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc7w4\" (UniqueName: \"kubernetes.io/projected/3ee1645f-fba5-4206-bb83-70d298a4c5ac-kube-api-access-fc7w4\") pod \"kube-proxy-zzmnr\" (UID: \"3ee1645f-fba5-4206-bb83-70d298a4c5ac\") " pod="kube-system/kube-proxy-zzmnr"
	Nov 19 22:36:20 old-k8s-version-264160 kubelet[1553]: I1119 22:36:20.875429    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-m9nqq" podStartSLOduration=1.9015295540000001 podCreationTimestamp="2025-11-19 22:36:17 +0000 UTC" firstStartedPulling="2025-11-19 22:36:18.551512561 +0000 UTC m=+13.118052946" lastFinishedPulling="2025-11-19 22:36:20.525369265 +0000 UTC m=+15.091909650" observedRunningTime="2025-11-19 22:36:20.875315381 +0000 UTC m=+15.441855783" watchObservedRunningTime="2025-11-19 22:36:20.875386258 +0000 UTC m=+15.441926643"
	Nov 19 22:36:20 old-k8s-version-264160 kubelet[1553]: I1119 22:36:20.876203    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zzmnr" podStartSLOduration=3.87615718 podCreationTimestamp="2025-11-19 22:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:36:18.872222396 +0000 UTC m=+13.438762780" watchObservedRunningTime="2025-11-19 22:36:20.87615718 +0000 UTC m=+15.442697581"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.092782    1553 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.139366    1553 topology_manager.go:215] "Topology Admit Handler" podUID="7e7645ad-49a9-4f0c-89cc-128538e4d95c" podNamespace="kube-system" podName="coredns-5dd5756b68-vz7zx"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.152446    1553 topology_manager.go:215] "Topology Admit Handler" podUID="8e2dda77-5a6d-4796-926b-5a06158f8cdf" podNamespace="kube-system" podName="storage-provisioner"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.233967    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e7645ad-49a9-4f0c-89cc-128538e4d95c-config-volume\") pod \"coredns-5dd5756b68-vz7zx\" (UID: \"7e7645ad-49a9-4f0c-89cc-128538e4d95c\") " pod="kube-system/coredns-5dd5756b68-vz7zx"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.234065    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkc9q\" (UniqueName: \"kubernetes.io/projected/7e7645ad-49a9-4f0c-89cc-128538e4d95c-kube-api-access-pkc9q\") pod \"coredns-5dd5756b68-vz7zx\" (UID: \"7e7645ad-49a9-4f0c-89cc-128538e4d95c\") " pod="kube-system/coredns-5dd5756b68-vz7zx"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.234125    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8e2dda77-5a6d-4796-926b-5a06158f8cdf-tmp\") pod \"storage-provisioner\" (UID: \"8e2dda77-5a6d-4796-926b-5a06158f8cdf\") " pod="kube-system/storage-provisioner"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.234229    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dt4z\" (UniqueName: \"kubernetes.io/projected/8e2dda77-5a6d-4796-926b-5a06158f8cdf-kube-api-access-4dt4z\") pod \"storage-provisioner\" (UID: \"8e2dda77-5a6d-4796-926b-5a06158f8cdf\") " pod="kube-system/storage-provisioner"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.928942    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vz7zx" podStartSLOduration=14.928898879 podCreationTimestamp="2025-11-19 22:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:36:31.902288078 +0000 UTC m=+26.468828471" watchObservedRunningTime="2025-11-19 22:36:31.928898879 +0000 UTC m=+26.495439272"
	Nov 19 22:36:31 old-k8s-version-264160 kubelet[1553]: I1119 22:36:31.929197    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.929173877 podCreationTimestamp="2025-11-19 22:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:36:31.926923668 +0000 UTC m=+26.493464217" watchObservedRunningTime="2025-11-19 22:36:31.929173877 +0000 UTC m=+26.495714286"
	Nov 19 22:36:43 old-k8s-version-264160 kubelet[1553]: I1119 22:36:43.392110    1553 topology_manager.go:215] "Topology Admit Handler" podUID="2af6deb4-937f-4b9b-9de6-995e75a080b8" podNamespace="default" podName="busybox"
	Nov 19 22:36:43 old-k8s-version-264160 kubelet[1553]: I1119 22:36:43.523830    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7ph\" (UniqueName: \"kubernetes.io/projected/2af6deb4-937f-4b9b-9de6-995e75a080b8-kube-api-access-kb7ph\") pod \"busybox\" (UID: \"2af6deb4-937f-4b9b-9de6-995e75a080b8\") " pod="default/busybox"
	Nov 19 22:36:46 old-k8s-version-264160 kubelet[1553]: I1119 22:36:46.935525    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.727293354 podCreationTimestamp="2025-11-19 22:36:43 +0000 UTC" firstStartedPulling="2025-11-19 22:36:43.851422103 +0000 UTC m=+38.417962488" lastFinishedPulling="2025-11-19 22:36:46.059604676 +0000 UTC m=+40.626145060" observedRunningTime="2025-11-19 22:36:46.934118134 +0000 UTC m=+41.500658519" watchObservedRunningTime="2025-11-19 22:36:46.935475926 +0000 UTC m=+41.502016319"
	
	
	==> storage-provisioner [f62b743b6725ec9ff1e91e664da6c9ce15d837afbab3608cc02fec3c9bd3d929] <==
	I1119 22:36:31.737660       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:36:31.757257       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:36:31.757310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 22:36:31.769006       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:36:31.771663       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-264160_88781c45-d0c6-484e-abf4-8c2df680f8d6!
	I1119 22:36:31.772385       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62b15298-f39b-43d5-9d35-ddeafad4bd4d", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-264160_88781c45-d0c6-484e-abf4-8c2df680f8d6 became leader
	I1119 22:36:31.872085       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-264160_88781c45-d0c6-484e-abf4-8c2df680f8d6!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-264160 -n old-k8s-version-264160
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-264160 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.86s)
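DeployApp fails on its final assertion: the open-files limit probed inside the busybox pod comes back as 1024 rather than the expected 1048576 (the identical check and gap appear verbatim in the default-k8s-diff-port run below). That limit is not set by the pod spec; it is inherited from the container runtime inside the kic node. A minimal manual cross-check, assuming the profile still exists at this point (the audit log further down shows it deleted at 22:38), might look like:

	# the limit as the test sees it, from inside the pod
	kubectl --context old-k8s-version-264160 exec busybox -- /bin/sh -c "ulimit -n"
	# the node-side counterpart: the NOFILE limit of the systemd-managed containerd unit in the kic node
	out/minikube-linux-arm64 ssh -p old-k8s-version-264160 sudo systemctl show containerd --property=LimitNOFILE
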
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-570856 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7195bbcd-aea0-4b92-b3d2-0e76651191f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7195bbcd-aea0-4b92-b3d2-0e76651191f2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003679837s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-570856 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
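The same 1024-vs-1048576 gap as in the old-k8s-version run. Note that in the docker inspect output below the kic container has "Ulimits": [], i.e. no per-container override, so the pod's NOFILE limit falls through to the runtime defaults on the host. For illustration only, a hypothetical Docker daemon default that would satisfy the test's expectation (a host-level change, not something the test performs or this host necessarily uses) looks like:

	# illustrative /etc/docker/daemon.json fragment
	{
	  "default-ulimits": {
	    "nofile": { "Name": "nofile", "Hard": 1048576, "Soft": 1048576 }
	  }
	}
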
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-570856
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-570856:

-- stdout --
	[
	    {
	        "Id": "6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602",
	        "Created": "2025-11-19T22:38:07.504803766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:38:07.603062132Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602/hosts",
	        "LogPath": "/var/lib/docker/containers/6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602/6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602-json.log",
	        "Name": "/default-k8s-diff-port-570856",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-570856:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-570856",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602",
	                "LowerDir": "/var/lib/docker/overlay2/86ab49af5f948a1a5c976977f23c42663d73cdc908842eb49b25686c33aa6cf2-init/diff:/var/lib/docker/overlay2/b6ebc9601ea0ae08484f263713f3358dd93f7748ebfafbd9155229908dee9606/diff",
	                "MergedDir": "/var/lib/docker/overlay2/86ab49af5f948a1a5c976977f23c42663d73cdc908842eb49b25686c33aa6cf2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/86ab49af5f948a1a5c976977f23c42663d73cdc908842eb49b25686c33aa6cf2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/86ab49af5f948a1a5c976977f23c42663d73cdc908842eb49b25686c33aa6cf2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-570856",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-570856/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-570856",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-570856",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-570856",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d5ab0decbee6ec3a9f7deffefb376d8c2a3acc5e4211707c845f8a635aa7fb0",
	            "SandboxKey": "/var/run/docker/netns/2d5ab0decbee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-570856": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:ca:56:88:07:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0f1dbc601a674795e4d1b7ef6c43743f5fa7dc65e3242142ad674b4d86c827a0",
	                    "EndpointID": "0abe73e0e2f058acdd1275bb70410bb04d4c2ac43764ee16a95613ed71ee9b48",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-570856",
	                        "6c73c273c7b0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
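The Ports map above shows every container port published on loopback; 8444/tcp is the non-default apiserver port this profile is named for (started with --apiserver-port=8444 per the audit log below), bound to 127.0.0.1:33067. While the container is running, a quick probe of the published endpoint is possible (a sketch; -k is needed because the serving certificate is issued for the cluster names, not localhost, and /version is readable anonymously under default RBAC):

	curl -sk https://127.0.0.1:33067/version
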
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-570856 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-570856 logs -n 25: (1.267937295s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-156590 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo crio config                                                                                                                                                                                                                   │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ delete  │ -p cilium-156590                                                                                                                                                                                                                                    │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-expiration-750367 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ ssh     │ force-systemd-env-388402 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-388402     │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p force-systemd-env-388402                                                                                                                                                                                                                         │ force-systemd-env-388402     │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-options-815306 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ cert-options-815306 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p cert-options-815306 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ delete  │ -p cert-options-815306                                                                                                                                                                                                                              │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ start   │ -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-264160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:36 UTC │ 19 Nov 25 22:36 UTC │
	│ stop    │ -p old-k8s-version-264160 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:36 UTC │ 19 Nov 25 22:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-264160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ start   │ -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ image   │ old-k8s-version-264160 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ pause   │ -p old-k8s-version-264160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ start   │ -p cert-expiration-750367 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:38 UTC │
	│ unpause │ -p old-k8s-version-264160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ delete  │ -p old-k8s-version-264160                                                                                                                                                                                                                           │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:38 UTC │
	│ delete  │ -p old-k8s-version-264160                                                                                                                                                                                                                           │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:38 UTC │
	│ start   │ -p default-k8s-diff-port-570856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:39 UTC │
	│ delete  │ -p cert-expiration-750367                                                                                                                                                                                                                           │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:38 UTC │
	│ start   │ -p embed-certs-227235 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:39 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:38:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:38:08.697293  215017 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:38:08.704083  215017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:38:08.704139  215017 out.go:374] Setting ErrFile to fd 2...
	I1119 22:38:08.704160  215017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:38:08.706471  215017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:38:08.707066  215017 out.go:368] Setting JSON to false
	I1119 22:38:08.712552  215017 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4810,"bootTime":1763587079,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 22:38:08.712658  215017 start.go:143] virtualization:  
	I1119 22:38:08.726924  215017 out.go:179] * [embed-certs-227235] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:38:08.730374  215017 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:38:08.730495  215017 notify.go:221] Checking for updates...
	I1119 22:38:08.738314  215017 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:38:08.741839  215017 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:08.750729  215017 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 22:38:08.753969  215017 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:38:08.758263  215017 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:38:08.761943  215017 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:08.762046  215017 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:38:08.820199  215017 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:38:08.820314  215017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:38:08.984129  215017 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-19 22:38:08.967483926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:38:08.984262  215017 docker.go:319] overlay module found
	I1119 22:38:08.987717  215017 out.go:179] * Using the docker driver based on user configuration
	I1119 22:38:08.990549  215017 start.go:309] selected driver: docker
	I1119 22:38:08.990571  215017 start.go:930] validating driver "docker" against <nil>
	I1119 22:38:08.990586  215017 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:38:08.991509  215017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:38:09.111798  215017 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-19 22:38:09.089203249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:38:09.111938  215017 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:38:09.112256  215017 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:38:09.116504  215017 out.go:179] * Using Docker driver with root privileges
	I1119 22:38:09.124274  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:09.124350  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:09.124363  215017 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
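
The two cni.go lines above show the CNI choice for this start: the docker driver combined with the containerd runtime gets kindnet. A minimal Go sketch of that decision rule; chooseCNI and its fallback are hypothetical simplifications, not minikube's actual API:

package main

import "fmt"

// chooseCNI is a hypothetical reduction of the rule the cni.go lines
// record: a docker driver paired with a non-docker runtime gets kindnet.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge" // fallback here is an assumption, not minikube's table
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet, as logged above
}
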
	I1119 22:38:09.124453  215017 start.go:353] cluster config:
	{Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:38:09.127735  215017 out.go:179] * Starting "embed-certs-227235" primary control-plane node in "embed-certs-227235" cluster
	I1119 22:38:09.130607  215017 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:38:09.133523  215017 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:38:09.136391  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:09.136441  215017 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1119 22:38:09.136452  215017 cache.go:65] Caching tarball of preloaded images
	I1119 22:38:09.136462  215017 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:38:09.136539  215017 preload.go:238] Found /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1119 22:38:09.136547  215017 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 22:38:09.136651  215017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json ...
	I1119 22:38:09.136675  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json: {Name:mk1b25f2623abcf89d25348624125d2f29b1b611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:09.183694  215017 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:38:09.183719  215017 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:38:09.183733  215017 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:38:09.183759  215017 start.go:360] acquireMachinesLock for embed-certs-227235: {Name:mk510c3d29263bf54ad7e262aba43b0a3739a3e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:38:09.184753  215017 start.go:364] duration metric: took 969.151µs to acquireMachinesLock for "embed-certs-227235"
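
Both lock specs above print the same {Delay:500ms Timeout:...} shape, and the 969µs duration shows an uncontended acquisition. A minimal Go sketch of acquire-with-retry under those parameters; the O_EXCL lock file is an assumed mechanism, not necessarily minikube's implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire retries creating path with O_EXCL until timeout elapses,
// sleeping delay between attempts -- the {Delay Timeout} shape logged above.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f.Close() // lock held; the caller removes path to release it
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	fmt.Println(acquire("/tmp/demo.lock", 500*time.Millisecond, 10*time.Minute))
}
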
	I1119 22:38:09.184791  215017 start.go:93] Provisioning new machine with config: &{Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:09.184859  215017 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:38:07.391014  213719 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-570856:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.786525535s)
	I1119 22:38:07.391041  213719 kic.go:203] duration metric: took 4.786659493s to extract preloaded images to volume ...
	W1119 22:38:07.391183  213719 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:38:07.391347  213719 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:38:07.481611  213719 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-570856 --name default-k8s-diff-port-570856 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-570856 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-570856 --network default-k8s-diff-port-570856 --ip 192.168.76.2 --volume default-k8s-diff-port-570856:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:38:07.963072  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Running}}
	I1119 22:38:07.992676  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:08.024300  213719 cli_runner.go:164] Run: docker exec default-k8s-diff-port-570856 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:38:08.120309  213719 oci.go:144] the created container "default-k8s-diff-port-570856" has a running status.
	I1119 22:38:08.120344  213719 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa...
	I1119 22:38:09.379092  213719 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:38:09.429394  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:09.452972  213719 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:38:09.452994  213719 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-570856 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:38:09.517582  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:09.543798  213719 machine.go:94] provisionDockerMachine start ...
	I1119 22:38:09.543906  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:09.574203  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:09.574537  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:09.574556  213719 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:38:09.753905  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-570856
	
	I1119 22:38:09.753978  213719 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-570856"
	I1119 22:38:09.754102  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:09.788736  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:09.789069  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:09.789083  213719 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-570856 && echo "default-k8s-diff-port-570856" | sudo tee /etc/hostname
	I1119 22:38:10.027975  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-570856
	
	I1119 22:38:10.028087  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.053594  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:10.053941  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:10.053963  213719 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-570856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-570856/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-570856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:38:10.228136  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: 
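
Each SSH step above first resolves the host port Docker published for the container's 22/tcp (33064 here) via the Go template in the cli_runner lines. A minimal sketch of the same lookup, shelling out to the docker CLI:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort returns the host port published for container port 22/tcp,
// using the same Go template the cli_runner lines above run.
func sshPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("default-k8s-diff-port-570856")
	fmt.Println(port, err) // e.g. "33064 <nil>" while the container runs
}
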
	I1119 22:38:10.228163  213719 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-2347/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-2347/.minikube}
	I1119 22:38:10.228198  213719 ubuntu.go:190] setting up certificates
	I1119 22:38:10.228211  213719 provision.go:84] configureAuth start
	I1119 22:38:10.228271  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:10.260529  213719 provision.go:143] copyHostCerts
	I1119 22:38:10.260589  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem, removing ...
	I1119 22:38:10.260598  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem
	I1119 22:38:10.262543  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem (1082 bytes)
	I1119 22:38:10.262680  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem, removing ...
	I1119 22:38:10.262696  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem
	I1119 22:38:10.262738  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem (1123 bytes)
	I1119 22:38:10.262811  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem, removing ...
	I1119 22:38:10.262821  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem
	I1119 22:38:10.262848  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem (1675 bytes)
	I1119 22:38:10.262912  213719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-570856 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-570856 localhost minikube]
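
The provision.go line above generates a server certificate whose SAN list covers the loopback address, the container IP, the hostname, localhost, and minikube. A compressed Go sketch of issuing a certificate carrying that SAN list; it self-signs for brevity, whereas minikube signs against the ca.pem/ca-key.pem pair named above:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-570856"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		// The san=[...] list from the provision.go line, split by type:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"default-k8s-diff-port-570856", "localhost", "minikube"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed (template used as its own parent) to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
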
	I1119 22:38:10.546932  213719 provision.go:177] copyRemoteCerts
	I1119 22:38:10.547006  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:38:10.547053  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.566569  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:10.670710  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:38:10.689919  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:38:10.709802  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:38:10.729254  213719 provision.go:87] duration metric: took 501.020286ms to configureAuth
	I1119 22:38:10.729341  213719 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:38:10.729558  213719 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:10.729599  213719 machine.go:97] duration metric: took 1.185770725s to provisionDockerMachine
	I1119 22:38:10.729629  213719 client.go:176] duration metric: took 8.893120772s to LocalClient.Create
	I1119 22:38:10.729671  213719 start.go:167] duration metric: took 8.893208625s to libmachine.API.Create "default-k8s-diff-port-570856"
	I1119 22:38:10.729697  213719 start.go:293] postStartSetup for "default-k8s-diff-port-570856" (driver="docker")
	I1119 22:38:10.729723  213719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:38:10.729835  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:38:10.729907  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.749040  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:10.851117  213719 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:38:10.854970  213719 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:38:10.855002  213719 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:38:10.855018  213719 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/addons for local assets ...
	I1119 22:38:10.855073  213719 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/files for local assets ...
	I1119 22:38:10.855157  213719 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem -> 41442.pem in /etc/ssl/certs
	I1119 22:38:10.855262  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:38:10.863647  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:10.886722  213719 start.go:296] duration metric: took 156.987573ms for postStartSetup
	I1119 22:38:10.887078  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:10.911718  213719 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/config.json ...
	I1119 22:38:10.911987  213719 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:38:10.912028  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.930471  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.027896  213719 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:38:11.033540  213719 start.go:128] duration metric: took 9.200775241s to createHost
	I1119 22:38:11.033562  213719 start.go:83] releasing machines lock for "default-k8s-diff-port-570856", held for 9.200980978s
	I1119 22:38:11.033643  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:11.053285  213719 ssh_runner.go:195] Run: cat /version.json
	I1119 22:38:11.053332  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:11.053561  213719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:38:11.053645  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:11.092834  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.096401  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.213924  213719 ssh_runner.go:195] Run: systemctl --version
	I1119 22:38:11.315479  213719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:38:11.320121  213719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:38:11.320192  213719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:38:11.356242  213719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:38:11.356267  213719 start.go:496] detecting cgroup driver to use...
	I1119 22:38:11.356302  213719 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:38:11.356353  213719 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:38:11.373019  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:38:11.387519  213719 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:38:11.387580  213719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:38:11.404728  213719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:38:11.423798  213719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:38:11.599278  213719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:38:11.778834  213719 docker.go:234] disabling docker service ...
	I1119 22:38:11.778912  213719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:38:11.811353  213719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:38:11.835015  213719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:38:11.988384  213719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:38:12.144244  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:38:12.158812  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:38:12.181589  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:38:12.191717  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:38:12.200100  213719 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1119 22:38:12.200165  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1119 22:38:12.208392  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:12.216869  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:38:12.225624  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:12.234125  213719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:38:12.241943  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:38:12.250703  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:38:12.259235  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:38:12.267694  213719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:38:12.275336  213719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:38:12.282663  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:12.447019  213719 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 22:38:12.641085  213719 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:38:12.641164  213719 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:38:12.647323  213719 start.go:564] Will wait 60s for crictl version
	I1119 22:38:12.647400  213719 ssh_runner.go:195] Run: which crictl
	I1119 22:38:12.654067  213719 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:38:12.706495  213719 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:38:12.706598  213719 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:12.728227  213719 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:12.756769  213719 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
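
The "Will wait 60s for socket path" step above polls for /run/containerd/containerd.sock after restarting containerd, before probing crictl. A minimal Go sketch of that wait; the 250ms poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats path until it exists or timeout elapses,
// mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
}
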
	I1119 22:38:09.188165  215017 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:38:09.188412  215017 start.go:159] libmachine.API.Create for "embed-certs-227235" (driver="docker")
	I1119 22:38:09.188460  215017 client.go:173] LocalClient.Create starting
	I1119 22:38:09.188522  215017 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem
	I1119 22:38:09.188557  215017 main.go:143] libmachine: Decoding PEM data...
	I1119 22:38:09.188575  215017 main.go:143] libmachine: Parsing certificate...
	I1119 22:38:09.188626  215017 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem
	I1119 22:38:09.188645  215017 main.go:143] libmachine: Decoding PEM data...
	I1119 22:38:09.188658  215017 main.go:143] libmachine: Parsing certificate...
	I1119 22:38:09.189025  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:38:09.226353  215017 cli_runner.go:211] docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:38:09.227297  215017 network_create.go:284] running [docker network inspect embed-certs-227235] to gather additional debugging logs...
	I1119 22:38:09.227404  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235
	W1119 22:38:09.248961  215017 cli_runner.go:211] docker network inspect embed-certs-227235 returned with exit code 1
	I1119 22:38:09.248988  215017 network_create.go:287] error running [docker network inspect embed-certs-227235]: docker network inspect embed-certs-227235: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-227235 not found
	I1119 22:38:09.249019  215017 network_create.go:289] output of [docker network inspect embed-certs-227235]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-227235 not found
	
	** /stderr **
	I1119 22:38:09.249110  215017 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:09.295459  215017 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b0fa93c84379 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:8f:4f:8f:5a:a3} reservation:<nil>}
	I1119 22:38:09.295758  215017 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-141c656f658f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:30:08:ea:1a:b9} reservation:<nil>}
	I1119 22:38:09.296184  215017 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae633a5ffae IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:73:d8:2e:30:94} reservation:<nil>}
	I1119 22:38:09.296454  215017 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0f1dbc601a67 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:02:5d:17:f2:79} reservation:<nil>}
	I1119 22:38:09.296821  215017 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a30110}
	I1119 22:38:09.296836  215017 network_create.go:124] attempt to create docker network embed-certs-227235 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 22:38:09.296890  215017 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-227235 embed-certs-227235
	I1119 22:38:09.389450  215017 network_create.go:108] docker network embed-certs-227235 192.168.85.0/24 created
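
The network.go lines above skip the four /24 subnets already backed by bridge interfaces and settle on 192.168.85.0/24. A minimal Go sketch of that scan, assuming the step-by-9 candidate sequence (49, 58, 67, 76, 85) visible in the log; minikube's real prober also inspects host interfaces and reservations:

package main

import "fmt"

// freeSubnet returns the first candidate /24 not already taken,
// matching the skip/skip/skip/skip/use sequence in the network.go lines.
func freeSubnet(taken map[string]bool) (string, bool) {
	// Candidates step the third octet by 9, as the log's sequence suggests.
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	fmt.Println(freeSubnet(taken)) // 192.168.85.0/24 true, as chosen above
}
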
	I1119 22:38:09.389488  215017 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-227235" container
	I1119 22:38:09.389570  215017 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:38:09.426012  215017 cli_runner.go:164] Run: docker volume create embed-certs-227235 --label name.minikube.sigs.k8s.io=embed-certs-227235 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:38:09.458413  215017 oci.go:103] Successfully created a docker volume embed-certs-227235
	I1119 22:38:09.458493  215017 cli_runner.go:164] Run: docker run --rm --name embed-certs-227235-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-227235 --entrypoint /usr/bin/test -v embed-certs-227235:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:38:10.048314  215017 oci.go:107] Successfully prepared a docker volume embed-certs-227235
	I1119 22:38:10.048380  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:10.048394  215017 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:38:10.048475  215017 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-227235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
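
The docker run above mounts the preload tarball read-only at /preloaded.tar and the machine volume at /extractDir, then untars inside the kicbase image. A Go sketch assembling the same invocation; extractPreload is a hypothetical helper and the example paths are shortened:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the docker run in the log: mount the tarball
// read-only at /preloaded.tar, mount the machine volume at /extractDir,
// and run tar inside the kicbase image to unpack the lz4-compressed images.
func extractPreload(tarball, volume, image string) *exec.Cmd {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
}

func main() {
	cmd := extractPreload(
		"/tmp/preloaded-images.tar.lz4", // path shortened for the sketch
		"embed-certs-227235",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918")
	fmt.Println(cmd.Args)
}
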
	I1119 22:38:12.761129  213719 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-570856 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:12.776448  213719 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 22:38:12.782082  213719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:12.793881  213719 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-570856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:38:12.794007  213719 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:12.794066  213719 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:12.828546  213719 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:12.828565  213719 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:38:12.828628  213719 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:12.874453  213719 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:12.874474  213719 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:38:12.874485  213719 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 containerd true true} ...
	I1119 22:38:12.874575  213719 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-570856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:38:12.874636  213719 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:38:12.913225  213719 cni.go:84] Creating CNI manager for ""
	I1119 22:38:12.913245  213719 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:12.913259  213719 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:38:12.913282  213719 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-570856 NodeName:default-k8s-diff-port-570856 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:38:12.913398  213719 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-570856"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:38:12.913465  213719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:38:12.935388  213719 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:38:12.935468  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:38:12.971226  213719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1119 22:38:13.007966  213719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:38:13.024911  213719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I1119 22:38:13.042516  213719 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:38:13.046335  213719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
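
The two /etc/hosts edits above (host.minikube.internal at 22:38:12.78 and control-plane.minikube.internal here) follow the same grep -v + echo pattern: drop any stale line for the name, then append a fresh entry. A minimal Go sketch of that upsert; upsertHost is a hypothetical helper:

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any line already ending in "\t<name>" and appends a
// fresh "<ip>\t<name>" entry -- the grep -v + echo pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	out := upsertHost("127.0.0.1\tlocalhost\n", "192.168.76.2", "control-plane.minikube.internal")
	fmt.Print(out)
}
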
	I1119 22:38:13.059831  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:13.191953  213719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:13.211424  213719 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856 for IP: 192.168.76.2
	I1119 22:38:13.211448  213719 certs.go:195] generating shared ca certs ...
	I1119 22:38:13.211464  213719 certs.go:227] acquiring lock for ca certs: {Name:mk76285c445bf14c1e73dedba3201c9181209ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.211598  213719 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key
	I1119 22:38:13.211646  213719 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key
	I1119 22:38:13.211656  213719 certs.go:257] generating profile certs ...
	I1119 22:38:13.211720  213719 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key
	I1119 22:38:13.211738  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt with IP's: []
	I1119 22:38:13.477759  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt ...
	I1119 22:38:13.477790  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt: {Name:mk4af4f401c57a7635e92da9feef7f2a7cfe3346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.477979  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key ...
	I1119 22:38:13.477993  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key: {Name:mkf947f0bf4e302c69721a8e2f74d4a272d67d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.478093  213719 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b
	I1119 22:38:13.478112  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 22:38:13.929859  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b ...
	I1119 22:38:13.929894  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b: {Name:mkb8c9d5541b894a86911cf54efc4b7ac6afa1c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.930079  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b ...
	I1119 22:38:13.930094  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b: {Name:mk87a24e67d10968973a6f22462b3f5c313a93de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.930252  213719 certs.go:382] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt
	I1119 22:38:13.930347  213719 certs.go:386] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key
	I1119 22:38:13.930411  213719 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key
	I1119 22:38:13.930431  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt with IP's: []
	I1119 22:38:14.332796  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt ...
	I1119 22:38:14.332825  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt: {Name:mkc687d4f88c0016e52dc106cbb67f62cb641716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:14.339910  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key ...
	I1119 22:38:14.339932  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key: {Name:mk85a94508f4f26fe196530cf3fdf265d53e1f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:14.340150  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem (1338 bytes)
	W1119 22:38:14.340197  213719 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144_empty.pem, impossibly tiny 0 bytes
	I1119 22:38:14.340211  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:38:14.340237  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:38:14.340265  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:38:14.340292  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem (1675 bytes)
	I1119 22:38:14.340340  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:14.340962  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:38:14.361559  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1119 22:38:14.382612  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:38:14.402496  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:38:14.420924  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:38:14.441447  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:38:14.460685  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:38:14.479294  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:38:14.497456  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem --> /usr/share/ca-certificates/4144.pem (1338 bytes)
	I1119 22:38:14.516533  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /usr/share/ca-certificates/41442.pem (1708 bytes)
	I1119 22:38:14.535911  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:38:14.553295  213719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:38:14.567201  213719 ssh_runner.go:195] Run: openssl version
	I1119 22:38:14.573427  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4144.pem && ln -fs /usr/share/ca-certificates/4144.pem /etc/ssl/certs/4144.pem"
	I1119 22:38:14.582011  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.585596  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.585711  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.626575  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4144.pem /etc/ssl/certs/51391683.0"
	I1119 22:38:14.635818  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41442.pem && ln -fs /usr/share/ca-certificates/41442.pem /etc/ssl/certs/41442.pem"
	I1119 22:38:14.644258  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.648142  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.648249  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.689425  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41442.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:38:14.698767  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:38:14.708989  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.713003  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.713064  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.755515  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
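
The openssl x509 -hash calls above compute each CA's subject hash, which names the /etc/ssl/certs/<hash>.0 symlink (51391683.0, 3ec20f2e.0, and b5213941.0 for minikubeCA). A Go sketch of that pairing, shelling out to openssl since Go's standard library exposes no subject-hash helper; hashLink is hypothetical:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink creates the /etc/ssl/certs/<subject-hash>.0 symlink that the
// "test -L ... || ln -fs ..." commands above set up for each CA cert.
func hashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// os.Symlink fails if link already exists; the shell version uses ln -fs.
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err) // e.g. /etc/ssl/certs/b5213941.0
}
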
	I1119 22:38:14.766003  213719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:38:14.769904  213719 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:38:14.769997  213719 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-570856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:38:14.770068  213719 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:38:14.770172  213719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:38:14.831712  213719 cri.go:89] found id: ""
	I1119 22:38:14.831793  213719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:38:14.844012  213719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:38:14.859844  213719 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:38:14.859902  213719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:38:14.875606  213719 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:38:14.875626  213719 kubeadm.go:158] found existing configuration files:
	
	I1119 22:38:14.875678  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 22:38:14.887366  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:38:14.887426  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:38:14.898741  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 22:38:14.907757  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:38:14.907816  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:38:14.915056  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 22:38:14.925190  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:38:14.925246  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:38:14.933043  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 22:38:14.943964  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:38:14.944080  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
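
Context for the grep/rm pairs above: before running kubeadm init, minikube keeps an existing kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint (https://control-plane.minikube.internal:8444 here); anything else is removed so kubeadm regenerates it. A stdlib-only Go sketch of that cleanup (hypothetical helper name):

package main

import (
	"bytes"
	"os"
	"path/filepath"
)

// removeStaleKubeconfigs keeps a kubeconfig only if it already references
// the expected control-plane endpoint; everything else is deleted so the
// following `kubeadm init` writes fresh files.
func removeStaleKubeconfigs(endpoint string) {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		p := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(p)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			_ = os.Remove(p) // missing or stale: remove (`rm -f` semantics)
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8444")
}
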
	I1119 22:38:14.956850  213719 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:38:15.022467  213719 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:38:15.022528  213719 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:38:15.074445  213719 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:38:15.074520  213719 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:38:15.074585  213719 kubeadm.go:319] OS: Linux
	I1119 22:38:15.074665  213719 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:38:15.074741  213719 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:38:15.074834  213719 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:38:15.074895  213719 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:38:15.074955  213719 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:38:15.075040  213719 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:38:15.075127  213719 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:38:15.075186  213719 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:38:15.075235  213719 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:38:15.163382  213719 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:38:15.163500  213719 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:38:15.163599  213719 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:38:15.178538  213719 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:38:15.183821  213719 out.go:252]   - Generating certificates and keys ...
	I1119 22:38:15.183926  213719 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:38:15.184002  213719 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:38:16.331729  213719 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:38:14.780147  215017 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-227235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.73163045s)
	I1119 22:38:14.780195  215017 kic.go:203] duration metric: took 4.731797196s to extract preloaded images to volume ...
	W1119 22:38:14.780320  215017 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:38:14.780432  215017 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:38:14.866741  215017 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-227235 --name embed-certs-227235 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-227235 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-227235 --network embed-certs-227235 --ip 192.168.85.2 --volume embed-certs-227235:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:38:15.242087  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Running}}
	I1119 22:38:15.266134  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:15.289559  215017 cli_runner.go:164] Run: docker exec embed-certs-227235 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:38:15.358592  215017 oci.go:144] the created container "embed-certs-227235" has a running status.
	I1119 22:38:15.358618  215017 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa...
	I1119 22:38:16.151858  215017 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:38:16.174089  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:16.193774  215017 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:38:16.193801  215017 kic_runner.go:114] Args: [docker exec --privileged embed-certs-227235 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:38:16.253392  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:16.274685  215017 machine.go:94] provisionDockerMachine start ...
	I1119 22:38:16.274793  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:16.295933  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:16.296265  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:16.296279  215017 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:38:16.296925  215017 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
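
The handshake EOF above is expected right after container creation: sshd inside the freshly started kicbase container is still coming up, and libmachine simply retries until the hostname command succeeds (it does at 22:38:19 below). A stdlib-only Go sketch of that wait, using the forwarded address 127.0.0.1:33069 from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the forwarded SSH port until something accepts TCP
// connections; until sshd inside the fresh container is up, attempts fail
// much like the handshake EOF above.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh endpoint %s not ready within %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("127.0.0.1:33069", 30*time.Second))
}
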
	I1119 22:38:16.648850  213719 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:38:17.027534  213719 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:38:17.535405  213719 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:38:18.457071  213719 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:38:18.457651  213719 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-570856 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:38:18.804201  213719 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:38:18.804516  213719 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-570856 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:38:19.251890  213719 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:38:19.443919  213719 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:38:19.989042  213719 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:38:19.989481  213719 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:38:20.248156  213719 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:38:20.575822  213719 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:38:21.322497  213719 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:38:21.582497  213719 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:38:22.046631  213719 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:38:22.048792  213719 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:38:22.056417  213719 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:38:19.458283  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-227235
	
	I1119 22:38:19.458361  215017 ubuntu.go:182] provisioning hostname "embed-certs-227235"
	I1119 22:38:19.458439  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:19.482663  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:19.482955  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:19.482966  215017 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-227235 && echo "embed-certs-227235" | sudo tee /etc/hostname
	I1119 22:38:19.668227  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-227235
	
	I1119 22:38:19.668364  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:19.696161  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:19.696518  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:19.696542  215017 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-227235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-227235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-227235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:38:19.844090  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:38:19.844206  215017 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-2347/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-2347/.minikube}
	I1119 22:38:19.844292  215017 ubuntu.go:190] setting up certificates
	I1119 22:38:19.844349  215017 provision.go:84] configureAuth start
	I1119 22:38:19.844460  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:19.871920  215017 provision.go:143] copyHostCerts
	I1119 22:38:19.871992  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem, removing ...
	I1119 22:38:19.872014  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem
	I1119 22:38:19.872097  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem (1082 bytes)
	I1119 22:38:19.872221  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem, removing ...
	I1119 22:38:19.872227  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem
	I1119 22:38:19.872260  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem (1123 bytes)
	I1119 22:38:19.872326  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem, removing ...
	I1119 22:38:19.872335  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem
	I1119 22:38:19.872358  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem (1675 bytes)
	I1119 22:38:19.872412  215017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem org=jenkins.embed-certs-227235 san=[127.0.0.1 192.168.85.2 embed-certs-227235 localhost minikube]
	I1119 22:38:20.323404  215017 provision.go:177] copyRemoteCerts
	I1119 22:38:20.323526  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:38:20.323586  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.356892  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.470993  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:38:20.504362  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 22:38:20.524210  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:38:20.544124  215017 provision.go:87] duration metric: took 699.7216ms to configureAuth
	I1119 22:38:20.544197  215017 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:38:20.544412  215017 config.go:182] Loaded profile config "embed-certs-227235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:20.544464  215017 machine.go:97] duration metric: took 4.26975387s to provisionDockerMachine
	I1119 22:38:20.544486  215017 client.go:176] duration metric: took 11.356016876s to LocalClient.Create
	I1119 22:38:20.544525  215017 start.go:167] duration metric: took 11.356113575s to libmachine.API.Create "embed-certs-227235"
	I1119 22:38:20.544554  215017 start.go:293] postStartSetup for "embed-certs-227235" (driver="docker")
	I1119 22:38:20.544591  215017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:38:20.544678  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:38:20.544756  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.565300  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.667067  215017 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:38:20.670916  215017 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:38:20.670945  215017 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:38:20.670955  215017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/addons for local assets ...
	I1119 22:38:20.671006  215017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/files for local assets ...
	I1119 22:38:20.671083  215017 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem -> 41442.pem in /etc/ssl/certs
	I1119 22:38:20.671184  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:38:20.680266  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:20.699713  215017 start.go:296] duration metric: took 155.103351ms for postStartSetup
	I1119 22:38:20.700150  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:20.718277  215017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json ...
	I1119 22:38:20.718546  215017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:38:20.718585  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.738828  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.841296  215017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:38:20.847214  215017 start.go:128] duration metric: took 11.662337268s to createHost
	I1119 22:38:20.847254  215017 start.go:83] releasing machines lock for "embed-certs-227235", held for 11.662472169s
	I1119 22:38:20.847344  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:20.867867  215017 ssh_runner.go:195] Run: cat /version.json
	I1119 22:38:20.867920  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.868163  215017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:38:20.868220  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.898565  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.913281  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:21.018482  215017 ssh_runner.go:195] Run: systemctl --version
	I1119 22:38:21.126924  215017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:38:21.133433  215017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:38:21.133571  215017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:38:21.174802  215017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:38:21.174882  215017 start.go:496] detecting cgroup driver to use...
	I1119 22:38:21.174939  215017 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:38:21.175034  215017 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:38:21.196072  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:38:21.213194  215017 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:38:21.213331  215017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:38:21.235649  215017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:38:21.258133  215017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:38:21.407367  215017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:38:21.569958  215017 docker.go:234] disabling docker service ...
	I1119 22:38:21.570075  215017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:38:21.595432  215017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:38:21.609975  215017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:38:21.765673  215017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:38:21.920710  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:38:21.936161  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:38:21.954615  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:38:21.964563  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:38:21.973986  215017 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1119 22:38:21.974106  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1119 22:38:21.983607  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:21.993186  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:38:22.003994  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:22.014801  215017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:38:22.024224  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:38:22.034441  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:38:22.044428  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:38:22.055950  215017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:38:22.067426  215017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:38:22.076858  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:22.269285  215017 ssh_runner.go:195] Run: sudo systemctl restart containerd
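
The sed pipeline above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is set to false to match the cgroupfs driver detected on the host, legacy runc v1 runtime names are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d, followed by daemon-reload and a containerd restart. One of those edits as a Go sketch (regexp stands in for `sed -i -r`; hypothetical helper name):

package main

import (
	"os"
	"regexp"
)

// setSystemdCgroup performs the same in-place edit as the
// `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'` call:
// false selects the cgroupfs driver detected on this host.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	val := "false"
	if enabled {
		val = "true"
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return os.WriteFile(path, re.ReplaceAll(data, []byte("${1}SystemdCgroup = "+val)), 0644)
}

func main() {
	_ = setSystemdCgroup("/etc/containerd/config.toml", false)
}
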
	I1119 22:38:22.431475  215017 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:38:22.431618  215017 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:38:22.438650  215017 start.go:564] Will wait 60s for crictl version
	I1119 22:38:22.438766  215017 ssh_runner.go:195] Run: which crictl
	I1119 22:38:22.442622  215017 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:38:22.484750  215017 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:38:22.484877  215017 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:22.511742  215017 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:22.537445  215017 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:38:22.540815  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:22.557518  215017 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:38:22.561769  215017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
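
The /bin/bash one-liner above is minikube's idempotent /etc/hosts pin: filter out any existing host.minikube.internal line, append the fresh gateway mapping, and copy the temp file back over /etc/hosts. The same filter-and-append in Go (a sketch using the same paths as the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so exactly one line maps the
// given name, mirroring the `grep -v` + `echo` + `cp` pipeline above.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	// Stage to a temp file, then rename, like the cp of /tmp/h.$$ above.
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	fmt.Println(pinHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"))
}
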
	I1119 22:38:22.577497  215017 kubeadm.go:884] updating cluster {Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:38:22.577609  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:22.577676  215017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:22.612620  215017 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:22.612641  215017 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:38:22.612700  215017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:22.639391  215017 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:22.639472  215017 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:38:22.639495  215017 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 22:38:22.639629  215017 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-227235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:38:22.639737  215017 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:38:22.675658  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:22.675677  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:22.675692  215017 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
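
On the CNI recommendation above: when no --cni flag is given, the docker driver combined with a containerd runtime makes minikube recommend kindnet, with the pod CIDR fixed at 10.244.0.0/16. A deliberately simplified, hypothetical sketch of that decision (the real selection in minikube's cni.go handles more drivers, runtimes, and multi-node cases):

package main

import "fmt"

// chooseCNI is a hypothetical simplification of the recommendation
// logged above, not the actual minikube decision table.
func chooseCNI(driver, containerRuntime string) string {
	if driver == "docker" && containerRuntime != "docker" {
		return "kindnet" // docker driver + containerd, as in this run
	}
	return "bridge"
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd"))
}
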
	I1119 22:38:22.675717  215017 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-227235 NodeName:embed-certs-227235 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:38:22.675829  215017 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-227235"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
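The kubeadm.yaml rendered above stacks four YAML documents: InitConfiguration (node registration, CRI socket, node IP), ClusterConfiguration (cert SANs, control-plane endpoint, pod and service subnets), KubeletConfiguration (cgroupfs driver, eviction thresholds zeroed so disk pressure never evicts in CI, per the file's own comment), and KubeProxyConfiguration (cluster CIDR, skipped conntrack timeouts). A stdlib-only Go sketch that splits such a multi-document file and lists the kinds:

package main

import (
	"fmt"
	"os"
	"strings"
)

// listKinds splits the multi-document kubeadm config rendered above and
// prints each document's kind: InitConfiguration, ClusterConfiguration,
// KubeletConfiguration, KubeProxyConfiguration.
func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
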
	I1119 22:38:22.675898  215017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:38:22.685785  215017 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:38:22.685854  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:38:22.694496  215017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 22:38:22.708805  215017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:38:22.723606  215017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1119 22:38:22.738717  215017 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:38:22.742965  215017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:22.753270  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:22.906872  215017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:22.924949  215017 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235 for IP: 192.168.85.2
	I1119 22:38:22.925022  215017 certs.go:195] generating shared ca certs ...
	I1119 22:38:22.925062  215017 certs.go:227] acquiring lock for ca certs: {Name:mk76285c445bf14c1e73dedba3201c9181209ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:22.925256  215017 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key
	I1119 22:38:22.925342  215017 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key
	I1119 22:38:22.925388  215017 certs.go:257] generating profile certs ...
	I1119 22:38:22.925497  215017 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key
	I1119 22:38:22.925541  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt with IP's: []
	I1119 22:38:22.060241  213719 out.go:252]   - Booting up control plane ...
	I1119 22:38:22.060350  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:38:22.060434  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:38:22.060504  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:38:22.079017  213719 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:38:22.079368  213719 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:38:22.087584  213719 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:38:22.087933  213719 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:38:22.087982  213719 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:38:22.256548  213719 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:38:22.256676  213719 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:38:23.257718  213719 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001280368s
	I1119 22:38:23.261499  213719 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:38:23.261885  213719 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1119 22:38:23.262185  213719 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:38:23.262436  213719 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
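
kubeadm's control-plane-check above polls each component's health endpoint until it answers or the 4m0s budget runs out: the kubelet's healthz on 10248, the apiserver's livez on 8444, the controller-manager on 10257, and the scheduler on 10259. A minimal Go sketch of such a poll against the plain-HTTP kubelet endpoint (the https endpoints would additionally need TLS configuration):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls a component health endpoint the way the
// control-plane-check above does, retrying until 200 OK or timeout.
func waitHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute))
}
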
	I1119 22:38:23.993413  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt ...
	I1119 22:38:23.993490  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt: {Name:mk9390e430c2adf83fa83c8b0fc6b544e7c6ac73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:23.993723  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key ...
	I1119 22:38:23.993760  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key: {Name:mkcc129ed7fd3a94daf755b808df5c2ca7b4f55b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:23.993902  215017 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43
	I1119 22:38:23.993944  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 22:38:24.949512  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 ...
	I1119 22:38:24.949545  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43: {Name:mk857e8f674694c0bdb694030b2402c50649af7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:24.949819  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43 ...
	I1119 22:38:24.949838  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43: {Name:mke1e1b8b382f368b842b0b0ebd43fcff825ce2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:24.949968  215017 certs.go:382] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt
	I1119 22:38:24.950099  215017 certs.go:386] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43 -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key
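
The apiserver cert generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]: the first address of the 10.96.0.0/12 service CIDR, loopback, 10.0.0.1, and the node IP, so clients can reach the apiserver under any of them. A self-signed stdlib sketch with the same SAN list (minikube actually signs with its cached minikubeCA rather than self-signing):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Self-signed stand-in for the apiserver cert above; the IP SANs are the
// ones shown in the log.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), // first address of ServiceCIDR 10.96.0.0/12
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.85.2"), // node IP
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
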
	I1119 22:38:24.950220  215017 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key
	I1119 22:38:24.950254  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt with IP's: []
	I1119 22:38:25.380015  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt ...
	I1119 22:38:25.380052  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt: {Name:mk60463442a2346a7467c65f294d7610875ba798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:25.381096  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key ...
	I1119 22:38:25.381124  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key: {Name:mkcc9ad63005e92a3409d0552d96d1073c0ab984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:25.381427  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem (1338 bytes)
	W1119 22:38:25.381505  215017 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144_empty.pem, impossibly tiny 0 bytes
	I1119 22:38:25.381526  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:38:25.381569  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:38:25.381616  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:38:25.381661  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem (1675 bytes)
	I1119 22:38:25.381777  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:25.382497  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:38:25.423747  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1119 22:38:25.460637  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:38:25.483373  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:38:25.503061  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 22:38:25.523436  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:38:25.548990  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:38:25.581396  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:38:25.622314  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem --> /usr/share/ca-certificates/4144.pem (1338 bytes)
	I1119 22:38:25.653452  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /usr/share/ca-certificates/41442.pem (1708 bytes)
	I1119 22:38:25.693769  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:38:25.730224  215017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:38:25.757903  215017 ssh_runner.go:195] Run: openssl version
	I1119 22:38:25.770954  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4144.pem && ln -fs /usr/share/ca-certificates/4144.pem /etc/ssl/certs/4144.pem"
	I1119 22:38:25.787344  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.792427  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.792569  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.854376  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4144.pem /etc/ssl/certs/51391683.0"
	I1119 22:38:25.867349  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41442.pem && ln -fs /usr/share/ca-certificates/41442.pem /etc/ssl/certs/41442.pem"
	I1119 22:38:25.885000  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.895195  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.895369  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.952771  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41442.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:38:25.969512  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:38:25.988362  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:25.994984  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:25.995107  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:26.054751  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:38:26.081314  215017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:38:26.089485  215017 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:38:26.089616  215017 kubeadm.go:401] StartCluster: {Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:38:26.089729  215017 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:38:26.089883  215017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:38:26.175081  215017 cri.go:89] found id: ""
	I1119 22:38:26.175273  215017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:38:26.201739  215017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:38:26.213453  215017 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:38:26.213538  215017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:38:26.227920  215017 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:38:26.227957  215017 kubeadm.go:158] found existing configuration files:
	
	I1119 22:38:26.228016  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:38:26.238822  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:38:26.238956  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:38:26.248847  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:38:26.259874  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:38:26.259981  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:38:26.269610  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:38:26.280662  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:38:26.280762  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:38:26.291067  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:38:26.299774  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:38:26.299863  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
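
The four grep/rm pairs above are the stale-config sweep: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and delete the file when the endpoint is absent. Here the files simply do not exist, so each grep exits with status 2 and the rm is a no-op. A condensed sketch of that loop (shelling out locally for illustration, where minikube would go through its ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command on the node; a stand-in for minikube's ssh_runner.
	func run(cmd string) error {
		return exec.Command("sh", "-c", cmd).Run()
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			// grep exits non-zero when the endpoint (or the file itself) is
			// missing; either way the kubeconfig cannot be reused, so remove it.
			if run(fmt.Sprintf("sudo grep %s %s", endpoint, f)) != nil {
				run("sudo rm -f " + f)
			}
		}
	}
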
	I1119 22:38:26.307272  215017 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:38:26.359370  215017 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:38:26.359879  215017 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:38:26.392070  215017 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:38:26.392176  215017 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:38:26.392260  215017 kubeadm.go:319] OS: Linux
	I1119 22:38:26.392332  215017 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:38:26.392404  215017 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:38:26.392515  215017 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:38:26.392603  215017 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:38:26.392689  215017 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:38:26.392799  215017 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:38:26.392885  215017 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:38:26.392964  215017 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:38:26.393042  215017 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:38:26.488613  215017 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:38:26.488982  215017 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:38:26.489119  215017 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:38:26.506528  215017 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:38:26.511504  215017 out.go:252]   - Generating certificates and keys ...
	I1119 22:38:26.511614  215017 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:38:26.511693  215017 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:38:27.434809  215017 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:38:27.852737  215017 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:38:28.219331  215017 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:38:28.667646  215017 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:38:29.503070  215017 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:38:29.503604  215017 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-227235 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:38:29.941520  215017 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:38:29.942072  215017 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-227235 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:38:30.399611  215017 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:38:30.598854  215017 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:38:31.066766  215017 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:38:31.067322  215017 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:38:31.727030  215017 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:38:33.054496  215017 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:38:33.215756  215017 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:38:33.577706  215017 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:38:33.942194  215017 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:38:33.943308  215017 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:38:33.946457  215017 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:38:33.309225  213719 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 10.04648217s
	I1119 22:38:36.096444  213719 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.833517484s
	I1119 22:38:37.264214  213719 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.001762391s
	I1119 22:38:37.296022  213719 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:38:37.335127  213719 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:38:37.354913  213719 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:38:37.355423  213719 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-570856 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:38:37.372044  213719 kubeadm.go:319] [bootstrap-token] Using token: r8vw8k.tssokqfhghfm62o1
	I1119 22:38:33.949816  215017 out.go:252]   - Booting up control plane ...
	I1119 22:38:33.949930  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:38:33.950028  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:38:33.951280  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:38:33.979582  215017 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:38:33.979702  215017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:38:33.992539  215017 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:38:33.992652  215017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:38:33.992697  215017 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:38:34.209173  215017 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:38:34.209304  215017 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:38:35.710488  215017 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501663596s
	I1119 22:38:35.713801  215017 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:38:35.714133  215017 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 22:38:35.714829  215017 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:38:35.715359  215017 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
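
The three control-plane-check probes are plain HTTPS polls: the apiserver's /livez on the advertise address, and the controller-manager's and scheduler's local health ports (10257 and 10259). A minimal polling sketch under the assumption that TLS verification must be skipped, since these health ports typically serve self-signed certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it answers 200 OK or the deadline passes.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		for _, u := range []string{
			"https://192.168.85.2:8443/livez", // kube-apiserver
			"https://127.0.0.1:10257/healthz", // kube-controller-manager
			"https://127.0.0.1:10259/livez",   // kube-scheduler
		} {
			fmt.Println(u, waitHealthy(u, 4*time.Minute))
		}
	}
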
	I1119 22:38:37.374987  213719 out.go:252]   - Configuring RBAC rules ...
	I1119 22:38:37.375116  213719 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:38:37.383216  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:38:37.395526  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:38:37.407816  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:38:37.414859  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:38:37.420042  213719 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:38:37.672205  213719 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:38:38.187591  213719 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:38:38.676130  213719 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:38:38.677635  213719 kubeadm.go:319] 
	I1119 22:38:38.677723  213719 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:38:38.677730  213719 kubeadm.go:319] 
	I1119 22:38:38.677810  213719 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:38:38.677815  213719 kubeadm.go:319] 
	I1119 22:38:38.677841  213719 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:38:38.678403  213719 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:38:38.678471  213719 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:38:38.678477  213719 kubeadm.go:319] 
	I1119 22:38:38.678533  213719 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:38:38.678538  213719 kubeadm.go:319] 
	I1119 22:38:38.678587  213719 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:38:38.678591  213719 kubeadm.go:319] 
	I1119 22:38:38.678645  213719 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:38:38.678746  213719 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:38:38.678817  213719 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:38:38.678822  213719 kubeadm.go:319] 
	I1119 22:38:38.679193  213719 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:38:38.679286  213719 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:38:38.679291  213719 kubeadm.go:319] 
	I1119 22:38:38.679572  213719 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token r8vw8k.tssokqfhghfm62o1 \
	I1119 22:38:38.679686  213719 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 \
	I1119 22:38:38.690497  213719 kubeadm.go:319] 	--control-plane 
	I1119 22:38:38.690515  213719 kubeadm.go:319] 
	I1119 22:38:38.690863  213719 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:38:38.690881  213719 kubeadm.go:319] 
	I1119 22:38:38.691192  213719 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token r8vw8k.tssokqfhghfm62o1 \
	I1119 22:38:38.691498  213719 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 
	I1119 22:38:38.710307  213719 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:38:38.710544  213719 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:38:38.710653  213719 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:38:38.710672  213719 cni.go:84] Creating CNI manager for ""
	I1119 22:38:38.710679  213719 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:38.713840  213719 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:38:38.716961  213719 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:38:38.736887  213719 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:38:38.736905  213719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:38:38.789317  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:38:39.400153  213719 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:38:39.400321  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:39.400530  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-570856 minikube.k8s.io/updated_at=2025_11_19T22_38_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=default-k8s-diff-port-570856 minikube.k8s.io/primary=true
	I1119 22:38:39.975271  213719 ops.go:34] apiserver oom_adj: -16
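
The oom_adj line is the result of the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run above: the apiserver runs with a strongly negative OOM adjustment (-16 in legacy oom_adj units) so the kernel's OOM killer prefers other victims, and the value is read back as a sanity check. A local equivalent, assuming a running kube-apiserver process:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the newest kube-apiserver PID, then read its legacy OOM adjustment.
		out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s", adj) // -16 on this control plane
	}
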
	I1119 22:38:39.975391  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:40.475885  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:40.976254  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:41.475492  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:41.975953  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:42.476216  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:42.976019  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:43.476374  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:43.938571  213719 kubeadm.go:1114] duration metric: took 4.538317084s to wait for elevateKubeSystemPrivileges
	I1119 22:38:43.938601  213719 kubeadm.go:403] duration metric: took 29.168610658s to StartCluster
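
The half-second cadence of the `kubectl get sa default` runs above is the elevateKubeSystemPrivileges wait: the loop polls until the "default" ServiceAccount exists, the signal that the service-account controller has caught up and the earlier minikube-rbac clusterrolebinding is usable. A sketch of that wait loop, using the kubeconfig path from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		const kc = "/var/lib/minikube/kubeconfig" // path taken from the log above
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Exits zero once the "default" ServiceAccount has been created.
			if exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kc).Run() == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
		}
		fmt.Println("timed out waiting for default service account")
	}
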
	I1119 22:38:43.938617  213719 settings.go:142] acquiring lock: {Name:mk5c8f7d46662d574c7e53cf7b09709855a1e14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:43.938675  213719 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:43.939379  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:43.939602  213719 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:43.939699  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:38:43.939950  213719 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:43.939984  213719 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:38:43.940039  213719 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-570856"
	I1119 22:38:43.940056  213719 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-570856"
	I1119 22:38:43.940077  213719 host.go:66] Checking if "default-k8s-diff-port-570856" exists ...
	I1119 22:38:43.940595  213719 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-570856"
	I1119 22:38:43.940614  213719 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-570856"
	I1119 22:38:43.940913  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:43.941163  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:43.943262  213719 out.go:179] * Verifying Kubernetes components...
	I1119 22:38:43.946436  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:43.988827  213719 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:38:43.992407  213719 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:43.992429  213719 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:38:43.992505  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:44.003465  213719 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-570856"
	I1119 22:38:44.003510  213719 host.go:66] Checking if "default-k8s-diff-port-570856" exists ...
	I1119 22:38:44.003968  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:44.031387  213719 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:44.031407  213719 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:38:44.031480  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:44.054335  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:44.071105  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
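
The `docker container inspect -f` calls above use a Go template to dig the published host port for 22/tcp out of the container's NetworkSettings; the two sshutil lines show the result (port 33064) being used to open the SSH connection. The same nested-index lookup, reproduced with text/template over a stand-in structure:

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in for the relevant slice of docker's inspect JSON.
	type binding struct{ HostPort string }

	func main() {
		data := map[string]any{
			"NetworkSettings": map[string]any{
				"Ports": map[string][]binding{
					"22/tcp": {{HostPort: "33064"}}, // value taken from the sshutil lines above
				},
			},
		}
		// Exactly the --format expression from the cli_runner lines.
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		tmpl.Execute(os.Stdout, data) // prints: 33064
	}
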
	I1119 22:38:44.576022  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:38:44.576179  213719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:44.632284  213719 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:44.830916  213719 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:45.842317  213719 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.266107104s)
	I1119 22:38:45.843122  213719 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-570856" to be "Ready" ...
	I1119 22:38:45.843439  213719 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.267383122s)
	I1119 22:38:45.843467  213719 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
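
The sed pipeline that just completed rewrites the coredns Corefile held in the ConfigMap: it inserts a hosts block before the "forward . /etc/resolv.conf" line and a log directive before "errors", then feeds the result to kubectl replace. Reconstructed from those sed expressions (directives between errors and hosts elided; indentation illustrative), the effective Corefile fragment is:

	log
	errors
	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

The hosts plugin answers host.minikube.internal from this static entry and falls through to the remaining plugins for everything else, which is what the "host record injected" line above reports.
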
	I1119 22:38:45.844308  213719 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21199704s)
	I1119 22:38:46.281571  213719 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.450606827s)
	I1119 22:38:46.284845  213719 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 22:38:46.287763  213719 addons.go:515] duration metric: took 2.347755369s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:38:46.347624  213719 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-570856" context rescaled to 1 replicas
	I1119 22:38:44.428112  215017 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 8.712323929s
	I1119 22:38:45.320373  215017 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.604203465s
	I1119 22:38:46.717967  215017 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.003347835s
	I1119 22:38:46.741715  215017 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:38:46.757144  215017 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:38:46.772462  215017 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:38:46.772924  215017 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-227235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:38:46.785381  215017 kubeadm.go:319] [bootstrap-token] Using token: ocom7o.y2g4phnwe8gpvos5
	I1119 22:38:46.788355  215017 out.go:252]   - Configuring RBAC rules ...
	I1119 22:38:46.788494  215017 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:38:46.793683  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:38:46.802650  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:38:46.811439  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:38:46.816154  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:38:46.823297  215017 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:38:47.128653  215017 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:38:47.591010  215017 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:38:48.125064  215017 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:38:48.126191  215017 kubeadm.go:319] 
	I1119 22:38:48.126264  215017 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:38:48.126270  215017 kubeadm.go:319] 
	I1119 22:38:48.126346  215017 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:38:48.126350  215017 kubeadm.go:319] 
	I1119 22:38:48.126376  215017 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:38:48.126445  215017 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:38:48.126502  215017 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:38:48.126506  215017 kubeadm.go:319] 
	I1119 22:38:48.126560  215017 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:38:48.126564  215017 kubeadm.go:319] 
	I1119 22:38:48.126611  215017 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:38:48.126618  215017 kubeadm.go:319] 
	I1119 22:38:48.126669  215017 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:38:48.126743  215017 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:38:48.126818  215017 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:38:48.126826  215017 kubeadm.go:319] 
	I1119 22:38:48.126910  215017 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:38:48.126985  215017 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:38:48.126989  215017 kubeadm.go:319] 
	I1119 22:38:48.127072  215017 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ocom7o.y2g4phnwe8gpvos5 \
	I1119 22:38:48.127175  215017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 \
	I1119 22:38:48.127195  215017 kubeadm.go:319] 	--control-plane 
	I1119 22:38:48.127200  215017 kubeadm.go:319] 
	I1119 22:38:48.127283  215017 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:38:48.127287  215017 kubeadm.go:319] 
	I1119 22:38:48.127368  215017 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ocom7o.y2g4phnwe8gpvos5 \
	I1119 22:38:48.127478  215017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 
	I1119 22:38:48.131460  215017 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:38:48.131800  215017 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:38:48.131963  215017 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:38:48.132002  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:48.132025  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:48.135396  215017 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:38:48.138681  215017 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:38:48.143238  215017 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:38:48.143261  215017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:38:48.157842  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:38:48.509463  215017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:38:48.509605  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:48.509695  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-227235 minikube.k8s.io/updated_at=2025_11_19T22_38_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=embed-certs-227235 minikube.k8s.io/primary=true
	I1119 22:38:48.531347  215017 ops.go:34] apiserver oom_adj: -16
	W1119 22:38:47.847437  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:50.346251  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:38:48.707714  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:49.208479  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:49.708331  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:50.207957  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:50.708351  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:51.208551  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:51.707874  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.208750  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.708197  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.870444  215017 kubeadm.go:1114] duration metric: took 4.360885722s to wait for elevateKubeSystemPrivileges
	I1119 22:38:52.870476  215017 kubeadm.go:403] duration metric: took 26.780891514s to StartCluster
	I1119 22:38:52.870495  215017 settings.go:142] acquiring lock: {Name:mk5c8f7d46662d574c7e53cf7b09709855a1e14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:52.870563  215017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:52.871877  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:52.872086  215017 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:52.872205  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:38:52.872510  215017 config.go:182] Loaded profile config "embed-certs-227235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:52.872559  215017 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:38:52.872623  215017 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-227235"
	I1119 22:38:52.872642  215017 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-227235"
	I1119 22:38:52.872666  215017 host.go:66] Checking if "embed-certs-227235" exists ...
	I1119 22:38:52.873151  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.873401  215017 addons.go:70] Setting default-storageclass=true in profile "embed-certs-227235"
	I1119 22:38:52.873423  215017 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-227235"
	I1119 22:38:52.873686  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.875844  215017 out.go:179] * Verifying Kubernetes components...
	I1119 22:38:52.879063  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:52.907006  215017 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:38:52.909996  215017 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:52.910022  215017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:38:52.910096  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:52.917662  215017 addons.go:239] Setting addon default-storageclass=true in "embed-certs-227235"
	I1119 22:38:52.917721  215017 host.go:66] Checking if "embed-certs-227235" exists ...
	I1119 22:38:52.918300  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.944204  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:52.957685  215017 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:52.957706  215017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:38:52.957769  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:52.993629  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:53.201073  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:38:53.201195  215017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:53.314355  215017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:53.327779  215017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:53.841120  215017 node_ready.go:35] waiting up to 6m0s for node "embed-certs-227235" to be "Ready" ...
	I1119 22:38:53.841457  215017 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 22:38:54.280299  215017 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1119 22:38:52.346734  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:54.347319  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:38:54.283209  215017 addons.go:515] duration metric: took 1.410633606s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:38:54.349594  215017 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-227235" context rescaled to 1 replicas
	W1119 22:38:55.844628  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:38:58.344650  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:38:56.846106  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:58.846730  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:00.846861  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:00.347351  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:02.844246  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:02.847116  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:05.346461  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:04.845042  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:07.345010  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:07.347215  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:09.846094  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:09.345198  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:11.346411  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:11.846299  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:13.846861  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:16.347393  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:13.844623  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:16.344779  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:18.345372  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:18.846715  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:21.346432  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:20.347964  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:22.843854  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:23.846693  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:39:25.847621  213719 node_ready.go:49] node "default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:25.847652  213719 node_ready.go:38] duration metric: took 40.004497931s for node "default-k8s-diff-port-570856" to be "Ready" ...
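
The node spent roughly 40s reporting "Ready":"False" above, which is typical while the CNI (kindnet here) is still coming up; the kubelet only flips the Ready condition once the network plugin is functional. One way to check the same condition by hand, using a jsonpath filter on the node's status conditions:

	kubectl --context default-k8s-diff-port-570856 get node default-k8s-diff-port-570856 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
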
	I1119 22:39:25.847666  213719 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:39:25.847724  213719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:39:25.874926  213719 api_server.go:72] duration metric: took 41.935286387s to wait for apiserver process to appear ...
	I1119 22:39:25.874949  213719 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:39:25.874968  213719 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1119 22:39:25.885461  213719 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1119 22:39:25.887414  213719 api_server.go:141] control plane version: v1.34.1
	I1119 22:39:25.887438  213719 api_server.go:131] duration metric: took 12.482962ms to wait for apiserver health ...
	I1119 22:39:25.887448  213719 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:39:25.891159  213719 system_pods.go:59] 8 kube-system pods found
	I1119 22:39:25.891193  213719 system_pods.go:61] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:25.891200  213719 system_pods.go:61] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:25.891207  213719 system_pods.go:61] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:25.891212  213719 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:25.891217  213719 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:25.891221  213719 system_pods.go:61] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:25.891226  213719 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:25.891231  213719 system_pods.go:61] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:25.891238  213719 system_pods.go:74] duration metric: took 3.784369ms to wait for pod list to return data ...
	I1119 22:39:25.891248  213719 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:39:25.894907  213719 default_sa.go:45] found service account: "default"
	I1119 22:39:25.894971  213719 default_sa.go:55] duration metric: took 3.716182ms for default service account to be created ...
	I1119 22:39:25.894995  213719 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:39:25.898958  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:25.899042  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:25.899064  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:25.899105  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:25.899128  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:25.899147  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:25.899170  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:25.899190  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:25.899259  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:25.899299  213719 retry.go:31] will retry after 294.705373ms: missing components: kube-dns
	I1119 22:39:26.198486  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.198523  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:26.198531  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.198541  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.198546  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.198552  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.198556  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.198561  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.198566  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:26.198584  213719 retry.go:31] will retry after 303.182095ms: missing components: kube-dns
	I1119 22:39:26.506554  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.506591  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:26.506598  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.506604  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.506608  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.506613  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.506618  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.506622  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.506627  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:26.506647  213719 retry.go:31] will retry after 472.574028ms: missing components: kube-dns
	I1119 22:39:26.984178  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.984212  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Running
	I1119 22:39:26.984220  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.984226  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.984231  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.984235  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.984239  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.984243  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.984247  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Running
	I1119 22:39:26.984255  213719 system_pods.go:126] duration metric: took 1.089240935s to wait for k8s-apps to be running ...
	I1119 22:39:26.984269  213719 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:39:26.984329  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:39:26.998904  213719 system_svc.go:56] duration metric: took 14.6234ms WaitForService to wait for kubelet
	I1119 22:39:26.998932  213719 kubeadm.go:587] duration metric: took 43.05929861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:39:26.998953  213719 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:39:27.002787  213719 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:39:27.003037  213719 node_conditions.go:123] node cpu capacity is 2
	I1119 22:39:27.003065  213719 node_conditions.go:105] duration metric: took 4.106062ms to run NodePressure ...
	I1119 22:39:27.003081  213719 start.go:242] waiting for startup goroutines ...
	I1119 22:39:27.003095  213719 start.go:247] waiting for cluster config update ...
	I1119 22:39:27.003112  213719 start.go:256] writing updated cluster config ...
	I1119 22:39:27.003490  213719 ssh_runner.go:195] Run: rm -f paused
	I1119 22:39:27.008294  213719 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:27.012665  213719 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4m8f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.017786  213719 pod_ready.go:94] pod "coredns-66bc5c9577-4m8f2" is "Ready"
	I1119 22:39:27.017812  213719 pod_ready.go:86] duration metric: took 5.121391ms for pod "coredns-66bc5c9577-4m8f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.020648  213719 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.025936  213719 pod_ready.go:94] pod "etcd-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.026011  213719 pod_ready.go:86] duration metric: took 5.321771ms for pod "etcd-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.028977  213719 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.034047  213719 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.034073  213719 pod_ready.go:86] duration metric: took 5.070216ms for pod "kube-apiserver-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.036706  213719 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.413085  213719 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.413111  213719 pod_ready.go:86] duration metric: took 376.376792ms for pod "kube-controller-manager-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.613330  213719 pod_ready.go:83] waiting for pod "kube-proxy-n4868" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.012703  213719 pod_ready.go:94] pod "kube-proxy-n4868" is "Ready"
	I1119 22:39:28.012745  213719 pod_ready.go:86] duration metric: took 399.33038ms for pod "kube-proxy-n4868" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.213996  213719 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.613271  213719 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:28.613305  213719 pod_ready.go:86] duration metric: took 399.283191ms for pod "kube-scheduler-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.613319  213719 pod_ready.go:40] duration metric: took 1.604992351s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:28.668463  213719 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:39:28.671810  213719 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-570856" cluster and "default" namespace by default
	W1119 22:39:24.844923  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:26.845154  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:29.344473  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:31.844696  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	I1119 22:39:34.345023  215017 node_ready.go:49] node "embed-certs-227235" is "Ready"
	I1119 22:39:34.345048  215017 node_ready.go:38] duration metric: took 40.503896306s for node "embed-certs-227235" to be "Ready" ...
	I1119 22:39:34.345063  215017 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:39:34.345119  215017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:39:34.362404  215017 api_server.go:72] duration metric: took 41.490288995s to wait for apiserver process to appear ...
	I1119 22:39:34.362426  215017 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:39:34.362445  215017 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:39:34.390640  215017 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:39:34.392448  215017 api_server.go:141] control plane version: v1.34.1
	I1119 22:39:34.392508  215017 api_server.go:131] duration metric: took 30.073646ms to wait for apiserver health ...
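
The healthz check logged above is a plain HTTPS GET that must come back 200 with body "ok". A self-contained sketch of the probe; certificate verification is skipped here purely for brevity, whereas the real client authenticates against the cluster CA:

// GET /healthz on the apiserver and expect "200: ok", as in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch-only shortcut; do not skip verification in real tooling.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}
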
	I1119 22:39:34.392532  215017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:39:34.400782  215017 system_pods.go:59] 8 kube-system pods found
	I1119 22:39:34.400862  215017 system_pods.go:61] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.400885  215017 system_pods.go:61] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.400909  215017 system_pods.go:61] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.400930  215017 system_pods.go:61] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.400951  215017 system_pods.go:61] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.400973  215017 system_pods.go:61] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.400994  215017 system_pods.go:61] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.401017  215017 system_pods.go:61] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.401041  215017 system_pods.go:74] duration metric: took 8.489033ms to wait for pod list to return data ...
	I1119 22:39:34.401063  215017 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:39:34.404927  215017 default_sa.go:45] found service account: "default"
	I1119 22:39:34.404991  215017 default_sa.go:55] duration metric: took 3.906002ms for default service account to be created ...
	I1119 22:39:34.405016  215017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:39:34.408626  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.408709  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.408731  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.408754  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.408780  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.408803  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.408827  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.408848  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.408881  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.408920  215017 retry.go:31] will retry after 270.078819ms: missing components: kube-dns
	I1119 22:39:34.682801  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.682906  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.682929  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.682965  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.682988  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.683010  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.683041  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.683064  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.683087  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.683118  215017 retry.go:31] will retry after 271.259245ms: missing components: kube-dns
	I1119 22:39:34.958505  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.958539  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Running
	I1119 22:39:34.958547  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.958551  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.958557  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.958584  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.958595  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.958600  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.958603  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Running
	I1119 22:39:34.958612  215017 system_pods.go:126] duration metric: took 553.576677ms to wait for k8s-apps to be running ...
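
The two retries above come from a jittered poll: list the kube-system pods, log whichever required component is still missing, sleep a randomized ~250-300ms, and try again. A hedged stand-in with the pod listing stubbed out (checkComponents and the interval are illustrative, not minikube's exact retry.go backoff):

// Retry-with-jitter loop shaped like the "will retry after ...: missing
// components: kube-dns" lines above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

var polls = 2 // pretend kube-dns needs two more polls, as in the log

// checkComponents stands in for listing kube-system pods; it returns the
// component names that are not yet Running.
func checkComponents() []string {
	if polls > 0 {
		polls--
		return []string{"kube-dns"}
	}
	return nil
}

func main() {
	for {
		missing := checkComponents()
		if len(missing) == 0 {
			fmt.Println("k8s-apps are running")
			return
		}
		d := 250*time.Millisecond + time.Duration(rand.Int63n(int64(50*time.Millisecond)))
		fmt.Printf("will retry after %v: missing components: %v\n", d, missing)
		time.Sleep(d)
	}
}
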
	I1119 22:39:34.958625  215017 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:39:34.958694  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:39:34.972706  215017 system_svc.go:56] duration metric: took 14.071483ms WaitForService to wait for kubelet
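
The kubelet check above reduces to an exit code: `systemctl is-active --quiet <unit>` exits 0 when the unit is active and prints nothing. A local sketch of the same test (minikube runs it over SSH inside the node container via ssh_runner):

// Ask systemd whether kubelet is active; the error value carries the answer.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
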
	I1119 22:39:34.972778  215017 kubeadm.go:587] duration metric: took 42.100669257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:39:34.972814  215017 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:39:34.975990  215017 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:39:34.976072  215017 node_conditions.go:123] node cpu capacity is 2
	I1119 22:39:34.976093  215017 node_conditions.go:105] duration metric: took 3.255435ms to run NodePressure ...
	I1119 22:39:34.976107  215017 start.go:242] waiting for startup goroutines ...
	I1119 22:39:34.976115  215017 start.go:247] waiting for cluster config update ...
	I1119 22:39:34.976126  215017 start.go:256] writing updated cluster config ...
	I1119 22:39:34.976427  215017 ssh_runner.go:195] Run: rm -f paused
	I1119 22:39:34.980344  215017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:34.985616  215017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6xhjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:34.991603  215017 pod_ready.go:94] pod "coredns-66bc5c9577-6xhjj" is "Ready"
	I1119 22:39:34.991644  215017 pod_ready.go:86] duration metric: took 5.99596ms for pod "coredns-66bc5c9577-6xhjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:34.994018  215017 pod_ready.go:83] waiting for pod "etcd-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.003190  215017 pod_ready.go:94] pod "etcd-embed-certs-227235" is "Ready"
	I1119 22:39:35.003274  215017 pod_ready.go:86] duration metric: took 9.230481ms for pod "etcd-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.007638  215017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.016450  215017 pod_ready.go:94] pod "kube-apiserver-embed-certs-227235" is "Ready"
	I1119 22:39:35.016480  215017 pod_ready.go:86] duration metric: took 8.80742ms for pod "kube-apiserver-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.019656  215017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.385673  215017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-227235" is "Ready"
	I1119 22:39:35.385700  215017 pod_ready.go:86] duration metric: took 365.999627ms for pod "kube-controller-manager-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.584880  215017 pod_ready.go:83] waiting for pod "kube-proxy-plgtr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.984356  215017 pod_ready.go:94] pod "kube-proxy-plgtr" is "Ready"
	I1119 22:39:35.984391  215017 pod_ready.go:86] duration metric: took 399.485083ms for pod "kube-proxy-plgtr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.185075  215017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.585576  215017 pod_ready.go:94] pod "kube-scheduler-embed-certs-227235" is "Ready"
	I1119 22:39:36.585603  215017 pod_ready.go:86] duration metric: took 400.501535ms for pod "kube-scheduler-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.585617  215017 pod_ready.go:40] duration metric: took 1.605197997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:36.654842  215017 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:39:36.659599  215017 out.go:179] * Done! kubectl is now configured to use "embed-certs-227235" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	74e3c4a4051a2       1611cd07b61d5       6 seconds ago        Running             busybox                   0                   1a3da71ef5cbb       busybox                                                default
	3da642e62162f       138784d87c9c5       12 seconds ago       Running             coredns                   0                   ce790582b535e       coredns-66bc5c9577-4m8f2                               kube-system
	ac19323559deb       ba04bb24b9575       12 seconds ago       Running             storage-provisioner       0                   ddab1664cb1b4       storage-provisioner                                    kube-system
	5d9cf5103ba44       05baa95f5142d       53 seconds ago       Running             kube-proxy                0                   dc1d0407b897c       kube-proxy-n4868                                       kube-system
	2644752343f75       b1a8c6f707935       53 seconds ago       Running             kindnet-cni               0                   3be3aa964521e       kindnet-n8jjs                                          kube-system
	829c562f0f222       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   58e7f16f559de       kube-apiserver-default-k8s-diff-port-570856            kube-system
	e4c4039c8a727       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   be66d6bc890de       kube-scheduler-default-k8s-diff-port-570856            kube-system
	7036e1f00cb91       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   78cab9510dd24       kube-controller-manager-default-k8s-diff-port-570856   kube-system
	7d268decdd0d9       a1894772a478e       About a minute ago   Running             etcd                      0                   1f7b11105786b       etcd-default-k8s-diff-port-570856                      kube-system
	
	
	==> containerd <==
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.901443418Z" level=info msg="connecting to shim ac19323559deb019c92d46623f8f93f141457384cef6ce6e8a9841354bf572f9" address="unix:///run/containerd/s/9a1b16d324b9a671f85f1750ce7f5bb69063a867b33c34598f859921a917a0e3" protocol=ttrpc version=3
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.918541527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4m8f2,Uid:92627362-0048-4b1a-af4e-7f9d8c53a483,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce790582b535e513887bddf96766a9a8ecfd6e0197d7ca84cbf1822f125bf5b1\""
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.929815419Z" level=info msg="CreateContainer within sandbox \"ce790582b535e513887bddf96766a9a8ecfd6e0197d7ca84cbf1822f125bf5b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.941110193Z" level=info msg="Container 3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.952176697Z" level=info msg="CreateContainer within sandbox \"ce790582b535e513887bddf96766a9a8ecfd6e0197d7ca84cbf1822f125bf5b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04\""
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.953018019Z" level=info msg="StartContainer for \"3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04\""
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.954053862Z" level=info msg="connecting to shim 3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04" address="unix:///run/containerd/s/c954935da72e80067f478974bf94d1c0e8514a06f70ad40469e0d1a929a88edc" protocol=ttrpc version=3
	Nov 19 22:39:26 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:26.005099021Z" level=info msg="StartContainer for \"ac19323559deb019c92d46623f8f93f141457384cef6ce6e8a9841354bf572f9\" returns successfully"
	Nov 19 22:39:26 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:26.049770008Z" level=info msg="StartContainer for \"3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04\" returns successfully"
	Nov 19 22:39:29 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:29.251702501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7195bbcd-aea0-4b92-b3d2-0e76651191f2,Namespace:default,Attempt:0,}"
	Nov 19 22:39:29 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:29.315085562Z" level=info msg="connecting to shim 1a3da71ef5cbb9437982964f37ad518852f9a8f293e918e817ac128904429709" address="unix:///run/containerd/s/050542b8bd9ad20d514db97fd26aa611141a11e653957fe3d3f85227a6c095b1" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:39:29 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:29.392832278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7195bbcd-aea0-4b92-b3d2-0e76651191f2,Namespace:default,Attempt:0,} returns sandbox id \"1a3da71ef5cbb9437982964f37ad518852f9a8f293e918e817ac128904429709\""
	Nov 19 22:39:29 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:29.397479646Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.542850807Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.544740526Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.547404742Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.551765469Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.552307891Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.15462186s"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.552358025Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.560427616Z" level=info msg="CreateContainer within sandbox \"1a3da71ef5cbb9437982964f37ad518852f9a8f293e918e817ac128904429709\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.578433180Z" level=info msg="Container 74e3c4a4051a25d4276374e92c83daaa0fe5a861a1520699792bcdb502865953: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.589877904Z" level=info msg="CreateContainer within sandbox \"1a3da71ef5cbb9437982964f37ad518852f9a8f293e918e817ac128904429709\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"74e3c4a4051a25d4276374e92c83daaa0fe5a861a1520699792bcdb502865953\""
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.591965631Z" level=info msg="StartContainer for \"74e3c4a4051a25d4276374e92c83daaa0fe5a861a1520699792bcdb502865953\""
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.592950864Z" level=info msg="connecting to shim 74e3c4a4051a25d4276374e92c83daaa0fe5a861a1520699792bcdb502865953" address="unix:///run/containerd/s/050542b8bd9ad20d514db97fd26aa611141a11e653957fe3d3f85227a6c095b1" protocol=ttrpc version=3
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.655026562Z" level=info msg="StartContainer for \"74e3c4a4051a25d4276374e92c83daaa0fe5a861a1520699792bcdb502865953\" returns successfully"
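
The containerd entries above trace the standard CRI sequence for the busybox pod: RunPodSandbox returns a sandbox id, PullImage fetches the image, CreateContainer places the container in the sandbox, and StartContainer runs it. A hedged sketch of those calls against the containerd CRI socket; the metadata values are illustrative, and real callers pass far more of the pod spec:

// Drive RunPodSandbox -> CreateContainer -> StartContainer over the CRI API,
// mirroring the containerd log above. Assumes the default socket path.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "busybox", Namespace: "default", Uid: "demo-uid",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "busybox"},
			Image:    &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started", ctr.ContainerId, "in sandbox", sb.PodSandboxId)
}
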
	
	
	==> coredns [3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41400 - 52734 "HINFO IN 3852003297008482046.8189843040733732678. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014915759s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-570856
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-570856
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=default-k8s-diff-port-570856
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_38_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:38:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-570856
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:39:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:39:25 +0000   Wed, 19 Nov 2025 22:38:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:39:25 +0000   Wed, 19 Nov 2025 22:38:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:39:25 +0000   Wed, 19 Nov 2025 22:38:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:39:25 +0000   Wed, 19 Nov 2025 22:39:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-570856
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                78e41195-0661-4dc0-9108-7c4f38576a10
	  Boot ID:                    b3875353-65b3-44b7-ad72-afadd7e2486a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-4m8f2                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-570856                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-n8jjs                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-570856             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-570856    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-n4868                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-570856             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Normal   NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     75s (x7 over 75s)  kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-570856 event: Registered Node default-k8s-diff-port-570856 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-570856 status is now: NodeReady
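
The Capacity/Allocatable block above is also available programmatically from the Node object, which is how minikube's node_conditions.go derives the "cpu capacity is 2" figures seen earlier in this log. A hedged client-go sketch that prints the same numbers:

// Fetch the node shown in `describe nodes` above and print its capacity
// (expected here: cpu=2, ephemeral-storage=203034800Ki).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"default-k8s-diff-port-570856", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), eph.String())
}
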
	
	
	==> dmesg <==
	[Nov19 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.032038] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[Nov19 21:18] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034282] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.730183] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.763794] kauditd_printk_skb: 36 callbacks suppressed
	[Nov19 21:50] hrtimer: interrupt took 11278311 ns
	
	
	==> etcd [7d268decdd0d9cc7d8445383e18deefcb2546926ad65b92e663c16dceaf5dba7] <==
	{"level":"warn","ts":"2025-11-19T22:38:31.510957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.580152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.654368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.682405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.718538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.769872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.826650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.853528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.907149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.947648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.007348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.040302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.089065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.142075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.187573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.246429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.284408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.356941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.395060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.438957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.484246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.534361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.578753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.604360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.830345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40306","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:39:38 up  1:21,  0 user,  load average: 3.06, 3.46, 2.85
	Linux default-k8s-diff-port-570856 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2644752343f75da9f774078a18f0ed03507320888681802aff4255970379b716] <==
	I1119 22:38:45.010942       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:38:45.087030       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:38:45.087210       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:38:45.087227       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:38:45.087242       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:38:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:38:45.313290       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:38:45.313311       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:38:45.313320       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:38:45.313677       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:39:15.312877       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 22:39:15.314028       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 22:39:15.314029       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:39:15.314107       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 22:39:16.513503       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:39:16.513537       1 metrics.go:72] Registering metrics
	I1119 22:39:16.513640       1 controller.go:711] "Syncing nftables rules"
	I1119 22:39:25.320002       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:39:25.320062       1 main.go:301] handling current node
	I1119 22:39:35.314240       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:39:35.314284       1 main.go:301] handling current node
	
	
	==> kube-apiserver [829c562f0f222bdcf3d0ec71ce8bbf82154469b6f01b9b3c5618df7fe63640f4] <==
	E1119 22:38:34.958031       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 22:38:35.005559       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:38:35.012650       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:35.037488       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:35.037789       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:38:35.053593       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:38:35.131021       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:38:35.275880       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:38:35.308030       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:38:35.308063       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:38:36.749579       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:38:36.813858       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:38:36.927509       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:38:36.935864       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:38:36.937220       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:38:36.950062       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:38:37.527347       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:38:38.153893       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:38:38.184681       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:38:38.203607       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:38:42.847411       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:42.885704       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:43.124192       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:38:43.528611       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 22:39:37.108049       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:59210: use of closed network connection
	
	
	==> kube-controller-manager [7036e1f00cb91c3a6b0c190abbd5baf8d233f9500feba9e54c191adab61fd1c6] <==
	I1119 22:38:42.750669       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:38:42.751037       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:38:42.751203       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:38:42.751488       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:38:42.751666       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:38:42.752429       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:38:42.752649       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:38:42.752805       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-570856"
	I1119 22:38:42.752893       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:38:42.755783       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:38:42.774721       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:38:42.777074       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:38:42.777259       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:38:42.777343       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:38:42.777467       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:38:42.778365       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:38:42.778606       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:38:42.778753       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:38:42.786198       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:38:42.791491       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-570856" podCIDRs=["10.244.0.0/24"]
	I1119 22:38:42.794386       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:38:42.801349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:38:42.801772       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:38:42.831209       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:39:27.758901       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5d9cf5103ba441828281ac1312821dc9fdde8384b738c12b5a727db2c33097e1] <==
	I1119 22:38:45.087805       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:38:45.270816       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:38:45.374058       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:38:45.374105       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 22:38:45.374212       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:38:45.436313       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:38:45.436365       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:38:45.446434       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:38:45.446745       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:38:45.446760       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:38:45.449425       1 config.go:200] "Starting service config controller"
	I1119 22:38:45.449436       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:38:45.449453       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:38:45.449458       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:38:45.449468       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:38:45.449471       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:38:45.462358       1 config.go:309] "Starting node config controller"
	I1119 22:38:45.462381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:38:45.462390       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:38:45.550476       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:38:45.550514       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:38:45.550554       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
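
The "Waiting for caches to sync" / "Caches are synced" pairs above are the standard client-go shared-informer startup handshake that kube-proxy (and the other components in this log) go through before serving. A generic sketch of the same pattern, assuming a default kubeconfig:

// Start a shared informer factory and block until its caches have synced,
// mirroring the kube-proxy log lines above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	nodes := factory.Core().V1().Nodes().Informer()

	factory.Start(stop)                                 // "Starting ... config controller"
	if !cache.WaitForCacheSync(stop, nodes.HasSynced) { // "Waiting for caches to sync"
		panic("cache sync failed")
	}
	fmt.Println("Caches are synced")
}
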
	
	
	==> kube-scheduler [e4c4039c8a727b705651ad9bb3ca2fec84f852b52718607b609d2e5e58012bc1] <==
	I1119 22:38:35.971471       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:38:35.992815       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:38:35.993034       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:38:35.993372       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:38:35.994099       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 22:38:36.014537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 22:38:36.016210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:38:36.021184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:38:36.023913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:38:36.023970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:38:36.024014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:38:36.048310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:38:36.024152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:38:36.024235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:38:36.024272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:38:36.024504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:38:36.024562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:38:36.051622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:38:36.051800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:38:36.051966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:38:36.052611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:38:36.052983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:38:36.024051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:38:36.053213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1119 22:38:36.993853       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:38:39 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:39.451867    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-570856" podStartSLOduration=3.451847081 podStartE2EDuration="3.451847081s" podCreationTimestamp="2025-11-19 22:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:39.429474892 +0000 UTC m=+1.326808871" watchObservedRunningTime="2025-11-19 22:38:39.451847081 +0000 UTC m=+1.349181051"
	Nov 19 22:38:39 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:39.493417    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-570856" podStartSLOduration=1.493398569 podStartE2EDuration="1.493398569s" podCreationTimestamp="2025-11-19 22:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:39.457149448 +0000 UTC m=+1.354483435" watchObservedRunningTime="2025-11-19 22:38:39.493398569 +0000 UTC m=+1.390732556"
	Nov 19 22:38:39 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:39.538843    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-570856" podStartSLOduration=1.538821008 podStartE2EDuration="1.538821008s" podCreationTimestamp="2025-11-19 22:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:39.496719343 +0000 UTC m=+1.394053331" watchObservedRunningTime="2025-11-19 22:38:39.538821008 +0000 UTC m=+1.436154979"
	Nov 19 22:38:39 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:39.539228    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-570856" podStartSLOduration=1.5392188629999999 podStartE2EDuration="1.539218863s" podCreationTimestamp="2025-11-19 22:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:39.531280931 +0000 UTC m=+1.428614918" watchObservedRunningTime="2025-11-19 22:38:39.539218863 +0000 UTC m=+1.436552842"
	Nov 19 22:38:42 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:42.878403    1482 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:38:42 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:42.886355    1482 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.650379    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f07057ba-2012-4291-ba43-a3638f7c8c58-cni-cfg\") pod \"kindnet-n8jjs\" (UID: \"f07057ba-2012-4291-ba43-a3638f7c8c58\") " pod="kube-system/kindnet-n8jjs"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658326    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/965b5310-35e9-4026-91b4-733b3eef9088-lib-modules\") pod \"kube-proxy-n4868\" (UID: \"965b5310-35e9-4026-91b4-733b3eef9088\") " pod="kube-system/kube-proxy-n4868"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658527    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrzn6\" (UniqueName: \"kubernetes.io/projected/965b5310-35e9-4026-91b4-733b3eef9088-kube-api-access-xrzn6\") pod \"kube-proxy-n4868\" (UID: \"965b5310-35e9-4026-91b4-733b3eef9088\") " pod="kube-system/kube-proxy-n4868"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658633    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f07057ba-2012-4291-ba43-a3638f7c8c58-xtables-lock\") pod \"kindnet-n8jjs\" (UID: \"f07057ba-2012-4291-ba43-a3638f7c8c58\") " pod="kube-system/kindnet-n8jjs"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658707    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6vsm\" (UniqueName: \"kubernetes.io/projected/f07057ba-2012-4291-ba43-a3638f7c8c58-kube-api-access-p6vsm\") pod \"kindnet-n8jjs\" (UID: \"f07057ba-2012-4291-ba43-a3638f7c8c58\") " pod="kube-system/kindnet-n8jjs"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658783    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/965b5310-35e9-4026-91b4-733b3eef9088-kube-proxy\") pod \"kube-proxy-n4868\" (UID: \"965b5310-35e9-4026-91b4-733b3eef9088\") " pod="kube-system/kube-proxy-n4868"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658859    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/965b5310-35e9-4026-91b4-733b3eef9088-xtables-lock\") pod \"kube-proxy-n4868\" (UID: \"965b5310-35e9-4026-91b4-733b3eef9088\") " pod="kube-system/kube-proxy-n4868"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658928    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f07057ba-2012-4291-ba43-a3638f7c8c58-lib-modules\") pod \"kindnet-n8jjs\" (UID: \"f07057ba-2012-4291-ba43-a3638f7c8c58\") " pod="kube-system/kindnet-n8jjs"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.822987    1482 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 22:38:45 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:45.751893    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n4868" podStartSLOduration=2.7518730529999997 podStartE2EDuration="2.751873053s" podCreationTimestamp="2025-11-19 22:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:45.722613735 +0000 UTC m=+7.619947714" watchObservedRunningTime="2025-11-19 22:38:45.751873053 +0000 UTC m=+7.649207032"
	Nov 19 22:38:48 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:48.222608    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-n8jjs" podStartSLOduration=5.222591257 podStartE2EDuration="5.222591257s" podCreationTimestamp="2025-11-19 22:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:45.755083565 +0000 UTC m=+7.652417692" watchObservedRunningTime="2025-11-19 22:38:48.222591257 +0000 UTC m=+10.119925228"
	Nov 19 22:39:25 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:25.383137    1482 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:39:25 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:25.516684    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5tx2\" (UniqueName: \"kubernetes.io/projected/2339c18e-d677-4777-b9a8-1df877bb86be-kube-api-access-c5tx2\") pod \"storage-provisioner\" (UID: \"2339c18e-d677-4777-b9a8-1df877bb86be\") " pod="kube-system/storage-provisioner"
	Nov 19 22:39:25 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:25.516924    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2339c18e-d677-4777-b9a8-1df877bb86be-tmp\") pod \"storage-provisioner\" (UID: \"2339c18e-d677-4777-b9a8-1df877bb86be\") " pod="kube-system/storage-provisioner"
	Nov 19 22:39:25 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:25.517010    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92627362-0048-4b1a-af4e-7f9d8c53a483-config-volume\") pod \"coredns-66bc5c9577-4m8f2\" (UID: \"92627362-0048-4b1a-af4e-7f9d8c53a483\") " pod="kube-system/coredns-66bc5c9577-4m8f2"
	Nov 19 22:39:25 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:25.517053    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7mvf\" (UniqueName: \"kubernetes.io/projected/92627362-0048-4b1a-af4e-7f9d8c53a483-kube-api-access-x7mvf\") pod \"coredns-66bc5c9577-4m8f2\" (UID: \"92627362-0048-4b1a-af4e-7f9d8c53a483\") " pod="kube-system/coredns-66bc5c9577-4m8f2"
	Nov 19 22:39:26 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:26.866528    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4m8f2" podStartSLOduration=43.866494741 podStartE2EDuration="43.866494741s" podCreationTimestamp="2025-11-19 22:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:39:26.845495192 +0000 UTC m=+48.742829171" watchObservedRunningTime="2025-11-19 22:39:26.866494741 +0000 UTC m=+48.763828753"
	Nov 19 22:39:26 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:26.867258    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.867245969 podStartE2EDuration="40.867245969s" podCreationTimestamp="2025-11-19 22:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:39:26.866303141 +0000 UTC m=+48.763637120" watchObservedRunningTime="2025-11-19 22:39:26.867245969 +0000 UTC m=+48.764579940"
	Nov 19 22:39:29 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:29.042907    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrdw9\" (UniqueName: \"kubernetes.io/projected/7195bbcd-aea0-4b92-b3d2-0e76651191f2-kube-api-access-qrdw9\") pod \"busybox\" (UID: \"7195bbcd-aea0-4b92-b3d2-0e76651191f2\") " pod="default/busybox"
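	
	Note: the pod_startup_latency_tracker entries above are kubelet's per-pod startup SLO measurements (podStartE2EDuration is pod creation to observed running). The final line, attaching the kube-api-access volume for default/busybox, is the DeployApp test workload landing on this node. A sketch for pulling just these markers from the node, assuming the kicbase image's systemd-managed kubelet:
	
	    # filter the kubelet journal for startup-latency records
	    minikube -p default-k8s-diff-port-570856 ssh -- sudo journalctl -u kubelet --no-pager | grep podStartSLOduration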
	
	
	==> storage-provisioner [ac19323559deb019c92d46623f8f93f141457384cef6ce6e8a9841354bf572f9] <==
	I1119 22:39:26.006629       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:39:26.035356       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:39:26.035415       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:39:26.039606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:26.058483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:39:26.058839       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:39:26.065517       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-570856_309ed0e4-ef2c-4f9d-b78b-7da3ba544427!
	I1119 22:39:26.062025       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27af889f-69f7-4c9e-b758-7ba8f06ea50a", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-570856_309ed0e4-ef2c-4f9d-b78b-7da3ba544427 became leader
	W1119 22:39:26.069946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:26.075856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:39:26.166726       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-570856_309ed0e4-ef2c-4f9d-b78b-7da3ba544427!
	W1119 22:39:28.084056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:28.091670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:30.096715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:30.103558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:32.107742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:32.117126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:34.120355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:34.125285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:36.128986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:36.133905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:38.137470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:38.149698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
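	
	Note: the repeating "v1 Endpoints is deprecated" warnings come from the storage provisioner's leader election, which still stores its lease in the kube-system/k8s.io-minikube-hostpath Endpoints object (see the LeaderElection event above) and renews it every couple of seconds. They are harmless here; if needed, the lease record can be inspected directly:
	
	    # the current leader identity lives in an annotation on this object
	    kubectl --context default-k8s-diff-port-570856 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml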
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-570856 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-570856
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-570856:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602",
	        "Created": "2025-11-19T22:38:07.504803766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:38:07.603062132Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602/hosts",
	        "LogPath": "/var/lib/docker/containers/6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602/6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602-json.log",
	        "Name": "/default-k8s-diff-port-570856",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-570856:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-570856",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c73c273c7b038693db24d99dfbb30acc51038433277e4b235b2c5ad0e88c602",
	                "LowerDir": "/var/lib/docker/overlay2/86ab49af5f948a1a5c976977f23c42663d73cdc908842eb49b25686c33aa6cf2-init/diff:/var/lib/docker/overlay2/b6ebc9601ea0ae08484f263713f3358dd93f7748ebfafbd9155229908dee9606/diff",
	                "MergedDir": "/var/lib/docker/overlay2/86ab49af5f948a1a5c976977f23c42663d73cdc908842eb49b25686c33aa6cf2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/86ab49af5f948a1a5c976977f23c42663d73cdc908842eb49b25686c33aa6cf2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/86ab49af5f948a1a5c976977f23c42663d73cdc908842eb49b25686c33aa6cf2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-570856",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-570856/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-570856",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-570856",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-570856",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d5ab0decbee6ec3a9f7deffefb376d8c2a3acc5e4211707c845f8a635aa7fb0",
	            "SandboxKey": "/var/run/docker/netns/2d5ab0decbee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-570856": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:ca:56:88:07:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0f1dbc601a674795e4d1b7ef6c43743f5fa7dc65e3242142ad674b4d86c827a0",
	                    "EndpointID": "0abe73e0e2f058acdd1275bb70410bb04d4c2ac43764ee16a95613ed71ee9b48",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-570856",
	                        "6c73c273c7b0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
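
Note: the inspect dump above is the full JSON; single fields can be pulled with a Go template instead, which is what the provisioner itself does further down in these logs. For instance, the host-mapped SSH port (33064 above) can be read back with:

    # same template the log's provisionDockerMachine step uses to find the SSH port
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-570856
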
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-570856 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-570856 logs -n 25: (1.180868517s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-156590 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ ssh     │ -p cilium-156590 sudo crio config                                                                                                                                                                                                                   │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ delete  │ -p cilium-156590                                                                                                                                                                                                                                    │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-expiration-750367 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ ssh     │ force-systemd-env-388402 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-388402     │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p force-systemd-env-388402                                                                                                                                                                                                                         │ force-systemd-env-388402     │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-options-815306 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ cert-options-815306 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p cert-options-815306 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ delete  │ -p cert-options-815306                                                                                                                                                                                                                              │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ start   │ -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-264160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:36 UTC │ 19 Nov 25 22:36 UTC │
	│ stop    │ -p old-k8s-version-264160 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:36 UTC │ 19 Nov 25 22:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-264160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ start   │ -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ image   │ old-k8s-version-264160 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ pause   │ -p old-k8s-version-264160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ start   │ -p cert-expiration-750367 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:38 UTC │
	│ unpause │ -p old-k8s-version-264160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ delete  │ -p old-k8s-version-264160                                                                                                                                                                                                                           │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:38 UTC │
	│ delete  │ -p old-k8s-version-264160                                                                                                                                                                                                                           │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:38 UTC │
	│ start   │ -p default-k8s-diff-port-570856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:39 UTC │
	│ delete  │ -p cert-expiration-750367                                                                                                                                                                                                                           │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:38 UTC │
	│ start   │ -p embed-certs-227235 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:39 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:38:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:38:08.697293  215017 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:38:08.704083  215017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:38:08.704139  215017 out.go:374] Setting ErrFile to fd 2...
	I1119 22:38:08.704160  215017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:38:08.706471  215017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:38:08.707066  215017 out.go:368] Setting JSON to false
	I1119 22:38:08.712552  215017 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4810,"bootTime":1763587079,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 22:38:08.712658  215017 start.go:143] virtualization:  
	I1119 22:38:08.726924  215017 out.go:179] * [embed-certs-227235] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:38:08.730374  215017 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:38:08.730495  215017 notify.go:221] Checking for updates...
	I1119 22:38:08.738314  215017 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:38:08.741839  215017 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:08.750729  215017 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 22:38:08.753969  215017 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:38:08.758263  215017 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:38:08.761943  215017 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:08.762046  215017 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:38:08.820199  215017 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:38:08.820314  215017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:38:08.984129  215017 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-19 22:38:08.967483926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:38:08.984262  215017 docker.go:319] overlay module found
	I1119 22:38:08.987717  215017 out.go:179] * Using the docker driver based on user configuration
	I1119 22:38:08.990549  215017 start.go:309] selected driver: docker
	I1119 22:38:08.990571  215017 start.go:930] validating driver "docker" against <nil>
	I1119 22:38:08.990586  215017 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:38:08.991509  215017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:38:09.111798  215017 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-19 22:38:09.089203249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:38:09.111938  215017 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:38:09.112256  215017 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:38:09.116504  215017 out.go:179] * Using Docker driver with root privileges
	I1119 22:38:09.124274  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:09.124350  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:09.124363  215017 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:38:09.124453  215017 start.go:353] cluster config:
	{Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:38:09.127735  215017 out.go:179] * Starting "embed-certs-227235" primary control-plane node in "embed-certs-227235" cluster
	I1119 22:38:09.130607  215017 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:38:09.133523  215017 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:38:09.136391  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:09.136441  215017 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1119 22:38:09.136452  215017 cache.go:65] Caching tarball of preloaded images
	I1119 22:38:09.136462  215017 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:38:09.136539  215017 preload.go:238] Found /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1119 22:38:09.136547  215017 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 22:38:09.136651  215017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json ...
	I1119 22:38:09.136675  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json: {Name:mk1b25f2623abcf89d25348624125d2f29b1b611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:09.183694  215017 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:38:09.183719  215017 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:38:09.183733  215017 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:38:09.183759  215017 start.go:360] acquireMachinesLock for embed-certs-227235: {Name:mk510c3d29263bf54ad7e262aba43b0a3739a3e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:38:09.184753  215017 start.go:364] duration metric: took 969.151µs to acquireMachinesLock for "embed-certs-227235"
	I1119 22:38:09.184791  215017 start.go:93] Provisioning new machine with config: &{Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:09.184859  215017 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:38:07.391014  213719 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-570856:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.786525535s)
	I1119 22:38:07.391041  213719 kic.go:203] duration metric: took 4.786659493s to extract preloaded images to volume ...
	W1119 22:38:07.391183  213719 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:38:07.391347  213719 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:38:07.481611  213719 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-570856 --name default-k8s-diff-port-570856 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-570856 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-570856 --network default-k8s-diff-port-570856 --ip 192.168.76.2 --volume default-k8s-diff-port-570856:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
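	
	Note: this docker run is how minikube materialises the "node": a privileged container with seccomp/apparmor disabled, /run and /tmp on tmpfs, the profile-named volume mounted at /var, and each node port published on an ephemeral 127.0.0.1 port (hence the empty HostPort placeholders under PortBindings in the inspect output above). The mappings docker actually chose can be listed with:
	
	    # show which 127.0.0.1 ports back 22, 2376, 5000, 8444 and 32443
	    docker port default-k8s-diff-port-570856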
	I1119 22:38:07.963072  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Running}}
	I1119 22:38:07.992676  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:08.024300  213719 cli_runner.go:164] Run: docker exec default-k8s-diff-port-570856 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:38:08.120309  213719 oci.go:144] the created container "default-k8s-diff-port-570856" has a running status.
	I1119 22:38:08.120344  213719 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa...
	I1119 22:38:09.379092  213719 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:38:09.429394  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:09.452972  213719 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:38:09.452994  213719 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-570856 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:38:09.517582  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:09.543798  213719 machine.go:94] provisionDockerMachine start ...
	I1119 22:38:09.543906  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:09.574203  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:09.574537  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:09.574556  213719 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:38:09.753905  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-570856
	
	I1119 22:38:09.753978  213719 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-570856"
	I1119 22:38:09.754102  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:09.788736  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:09.789069  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:09.789083  213719 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-570856 && echo "default-k8s-diff-port-570856" | sudo tee /etc/hostname
	I1119 22:38:10.027975  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-570856
	
	I1119 22:38:10.028087  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.053594  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:10.053941  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:10.053963  213719 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-570856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-570856/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-570856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:38:10.228136  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: 
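	
	Note: the shell fragment above is the provisioner's idempotent hostname fix-up: it rewrites the 127.0.1.1 entry (or appends one) only when /etc/hosts does not already map the profile name. A quick way to verify the result, assuming the profile container is still running:
	
	    # expect a 127.0.1.1 line carrying the profile hostname
	    minikube -p default-k8s-diff-port-570856 ssh -- grep default-k8s-diff-port-570856 /etc/hosts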
	I1119 22:38:10.228163  213719 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-2347/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-2347/.minikube}
	I1119 22:38:10.228198  213719 ubuntu.go:190] setting up certificates
	I1119 22:38:10.228211  213719 provision.go:84] configureAuth start
	I1119 22:38:10.228271  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:10.260529  213719 provision.go:143] copyHostCerts
	I1119 22:38:10.260589  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem, removing ...
	I1119 22:38:10.260598  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem
	I1119 22:38:10.262543  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem (1082 bytes)
	I1119 22:38:10.262680  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem, removing ...
	I1119 22:38:10.262696  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem
	I1119 22:38:10.262738  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem (1123 bytes)
	I1119 22:38:10.262811  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem, removing ...
	I1119 22:38:10.262821  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem
	I1119 22:38:10.262848  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem (1675 bytes)
	I1119 22:38:10.262912  213719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-570856 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-570856 localhost minikube]
	I1119 22:38:10.546932  213719 provision.go:177] copyRemoteCerts
	I1119 22:38:10.547006  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:38:10.547053  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.566569  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:10.670710  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:38:10.689919  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:38:10.709802  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:38:10.729254  213719 provision.go:87] duration metric: took 501.020286ms to configureAuth
	I1119 22:38:10.729341  213719 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:38:10.729558  213719 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:10.729599  213719 machine.go:97] duration metric: took 1.185770725s to provisionDockerMachine
	I1119 22:38:10.729629  213719 client.go:176] duration metric: took 8.893120772s to LocalClient.Create
	I1119 22:38:10.729671  213719 start.go:167] duration metric: took 8.893208625s to libmachine.API.Create "default-k8s-diff-port-570856"
	I1119 22:38:10.729697  213719 start.go:293] postStartSetup for "default-k8s-diff-port-570856" (driver="docker")
	I1119 22:38:10.729723  213719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:38:10.729835  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:38:10.729907  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.749040  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:10.851117  213719 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:38:10.854970  213719 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:38:10.855002  213719 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:38:10.855018  213719 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/addons for local assets ...
	I1119 22:38:10.855073  213719 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/files for local assets ...
	I1119 22:38:10.855157  213719 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem -> 41442.pem in /etc/ssl/certs
	I1119 22:38:10.855262  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:38:10.863647  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:10.886722  213719 start.go:296] duration metric: took 156.987573ms for postStartSetup
	I1119 22:38:10.887078  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:10.911718  213719 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/config.json ...
	I1119 22:38:10.911987  213719 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:38:10.912028  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.930471  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.027896  213719 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:38:11.033540  213719 start.go:128] duration metric: took 9.200775241s to createHost
	I1119 22:38:11.033562  213719 start.go:83] releasing machines lock for "default-k8s-diff-port-570856", held for 9.200980978s
	I1119 22:38:11.033643  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:11.053285  213719 ssh_runner.go:195] Run: cat /version.json
	I1119 22:38:11.053332  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:11.053561  213719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:38:11.053645  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:11.092834  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.096401  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.213924  213719 ssh_runner.go:195] Run: systemctl --version
	I1119 22:38:11.315479  213719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:38:11.320121  213719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:38:11.320192  213719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:38:11.356242  213719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:38:11.356267  213719 start.go:496] detecting cgroup driver to use...
	I1119 22:38:11.356302  213719 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:38:11.356353  213719 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:38:11.373019  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:38:11.387519  213719 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:38:11.387580  213719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:38:11.404728  213719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:38:11.423798  213719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:38:11.599278  213719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:38:11.778834  213719 docker.go:234] disabling docker service ...
	I1119 22:38:11.778912  213719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:38:11.811353  213719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:38:11.835015  213719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:38:11.988384  213719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:38:12.144244  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:38:12.158812  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:38:12.181589  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:38:12.191717  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:38:12.200100  213719 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1119 22:38:12.200165  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1119 22:38:12.208392  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:12.216869  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:38:12.225624  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:12.234125  213719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:38:12.241943  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:38:12.250703  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:38:12.259235  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:38:12.267694  213719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:38:12.275336  213719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:38:12.282663  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:12.447019  213719 ssh_runner.go:195] Run: sudo systemctl restart containerd
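The block of sed commands above is how minikube reconciles an existing /etc/containerd/config.toml in place instead of templating a fresh one: pin the pause image, force SystemdCgroup = false to match the cgroupfs driver detected on the host, migrate legacy io.containerd.runtime.v1.linux / runc.v1 runtime names to io.containerd.runc.v2, then reload systemd and restart containerd. A condensed sketch of the core steps, using the same paths the log shows:

	# Point crictl at containerd's CRI socket (file content as written above).
	printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' \
	  | sudo tee /etc/crictl.yaml
	# Align containerd's cgroup driver with the host ("cgroupfs" was detected here).
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	# Apply the edits.
	sudo systemctl daemon-reload && sudo systemctl restart containerd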
	I1119 22:38:12.641085  213719 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:38:12.641164  213719 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:38:12.647323  213719 start.go:564] Will wait 60s for crictl version
	I1119 22:38:12.647400  213719 ssh_runner.go:195] Run: which crictl
	I1119 22:38:12.654067  213719 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:38:12.706495  213719 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:38:12.706598  213719 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:12.728227  213719 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:12.756769  213719 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:38:09.188165  215017 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:38:09.188412  215017 start.go:159] libmachine.API.Create for "embed-certs-227235" (driver="docker")
	I1119 22:38:09.188460  215017 client.go:173] LocalClient.Create starting
	I1119 22:38:09.188522  215017 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem
	I1119 22:38:09.188557  215017 main.go:143] libmachine: Decoding PEM data...
	I1119 22:38:09.188575  215017 main.go:143] libmachine: Parsing certificate...
	I1119 22:38:09.188626  215017 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem
	I1119 22:38:09.188645  215017 main.go:143] libmachine: Decoding PEM data...
	I1119 22:38:09.188658  215017 main.go:143] libmachine: Parsing certificate...
	I1119 22:38:09.189025  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:38:09.226353  215017 cli_runner.go:211] docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:38:09.227297  215017 network_create.go:284] running [docker network inspect embed-certs-227235] to gather additional debugging logs...
	I1119 22:38:09.227404  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235
	W1119 22:38:09.248961  215017 cli_runner.go:211] docker network inspect embed-certs-227235 returned with exit code 1
	I1119 22:38:09.248988  215017 network_create.go:287] error running [docker network inspect embed-certs-227235]: docker network inspect embed-certs-227235: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-227235 not found
	I1119 22:38:09.249019  215017 network_create.go:289] output of [docker network inspect embed-certs-227235]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-227235 not found
	
	** /stderr **
	I1119 22:38:09.249110  215017 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:09.295459  215017 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b0fa93c84379 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:8f:4f:8f:5a:a3} reservation:<nil>}
	I1119 22:38:09.295758  215017 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-141c656f658f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:30:08:ea:1a:b9} reservation:<nil>}
	I1119 22:38:09.296184  215017 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae633a5ffae IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:73:d8:2e:30:94} reservation:<nil>}
	I1119 22:38:09.296454  215017 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0f1dbc601a67 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:02:5d:17:f2:79} reservation:<nil>}
	I1119 22:38:09.296821  215017 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a30110}
	I1119 22:38:09.296836  215017 network_create.go:124] attempt to create docker network embed-certs-227235 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 22:38:09.296890  215017 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-227235 embed-certs-227235
	I1119 22:38:09.389450  215017 network_create.go:108] docker network embed-certs-227235 192.168.85.0/24 created
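Note how the interleaved embed-certs-227235 run picks its subnet: candidate 192.168.x.0/24 networks are walked with the third octet stepping by 9 (49, 58, 67, 76, ...) and the first one without an existing bridge is taken. A rough sketch of that scan against the host routing table; this only approximates the check, since the real code inspects Docker networks and interfaces:

	# Find the first free 192.168.x.0/24 candidate, stepping the third octet by 9
	# as the log shows (49, 58, 67, 76, 85, ...). Approximation: a subnet counts
	# as taken if it already appears in the host routing table.
	for third in 49 58 67 76 85 94 103; do
	  subnet="192.168.${third}.0/24"
	  if ! ip route | grep -qF "$subnet"; then
	    echo "free: $subnet"
	    break
	  fi
	done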
	I1119 22:38:09.389488  215017 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-227235" container
	I1119 22:38:09.389570  215017 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:38:09.426012  215017 cli_runner.go:164] Run: docker volume create embed-certs-227235 --label name.minikube.sigs.k8s.io=embed-certs-227235 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:38:09.458413  215017 oci.go:103] Successfully created a docker volume embed-certs-227235
	I1119 22:38:09.458493  215017 cli_runner.go:164] Run: docker run --rm --name embed-certs-227235-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-227235 --entrypoint /usr/bin/test -v embed-certs-227235:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:38:10.048314  215017 oci.go:107] Successfully prepared a docker volume embed-certs-227235
	I1119 22:38:10.048380  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:10.048394  215017 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:38:10.048475  215017 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-227235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
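The docker run above is the preload trick: the host-side tarball of container images is bind-mounted read-only into a throwaway kicbase container whose entrypoint is tar, which unpacks it straight into the named volume that later becomes the node's /var. Stripped to its shape (TARBALL is a placeholder path and the untagged image ref is illustrative; the log shows the real preload path and pinned digest):

	# Seed a named volume from a host tarball without starting the node container.
	VOLUME=embed-certs-227235
	TARBALL=/path/to/preloaded-images.tar.lz4   # placeholder; see the log for the real path
	docker volume create "$VOLUME"
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$TARBALL":/preloaded.tar:ro \
	  -v "$VOLUME":/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918 \
	  -I lz4 -xf /preloaded.tar -C /extractDir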
	I1119 22:38:12.761129  213719 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-570856 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:12.776448  213719 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 22:38:12.782082  213719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:12.793881  213719 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-570856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:38:12.794007  213719 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:12.794066  213719 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:12.828546  213719 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:12.828565  213719 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:38:12.828628  213719 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:12.874453  213719 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:12.874474  213719 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:38:12.874485  213719 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 containerd true true} ...
	I1119 22:38:12.874575  213719 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-570856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
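The generated kubelet drop-in above relies on a standard systemd idiom: the bare ExecStart= first clears the command inherited from the packaged kubelet.service, and the second ExecStart= supplies minikube's own invocation; without the empty assignment, systemd would reject a second ExecStart on a non-oneshot service. As a file, the drop-in the log writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf amounts to:

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (shape per the log above)
	[Unit]
	Wants=containerd.service

	[Service]
	# The first line clears any ExecStart inherited from the base unit.
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-570856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2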
	I1119 22:38:12.874636  213719 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:38:12.913225  213719 cni.go:84] Creating CNI manager for ""
	I1119 22:38:12.913245  213719 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:12.913259  213719 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:38:12.913282  213719 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-570856 NodeName:default-k8s-diff-port-570856 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:38:12.913398  213719 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-570856"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:38:12.913465  213719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:38:12.935388  213719 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:38:12.935468  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:38:12.971226  213719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1119 22:38:13.007966  213719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:38:13.024911  213719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
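Two details in the kubeadm/kubelet config rendered and copied above are deliberate for a disposable CI node: imageGCHighThresholdPercent: 100 plus the all-zero evictionHard block effectively turns off kubelet disk-pressure eviction (the inline comment in the config says as much), and failSwapOn: false lets the kubelet start on hosts with swap enabled. A staged config like this can be exercised without mutating the node before the real run, e.g.:

	# Dry-run kubeadm against the staged config; kubeadm init supports --dry-run.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run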
	I1119 22:38:13.042516  213719 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:38:13.046335  213719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:13.059831  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:13.191953  213719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:13.211424  213719 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856 for IP: 192.168.76.2
	I1119 22:38:13.211448  213719 certs.go:195] generating shared ca certs ...
	I1119 22:38:13.211464  213719 certs.go:227] acquiring lock for ca certs: {Name:mk76285c445bf14c1e73dedba3201c9181209ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.211598  213719 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key
	I1119 22:38:13.211646  213719 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key
	I1119 22:38:13.211656  213719 certs.go:257] generating profile certs ...
	I1119 22:38:13.211720  213719 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key
	I1119 22:38:13.211738  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt with IP's: []
	I1119 22:38:13.477759  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt ...
	I1119 22:38:13.477790  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt: {Name:mk4af4f401c57a7635e92da9feef7f2a7cfe3346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.477979  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key ...
	I1119 22:38:13.477993  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key: {Name:mkf947f0bf4e302c69721a8e2f74d4a272d67d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.478093  213719 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b
	I1119 22:38:13.478112  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 22:38:13.929859  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b ...
	I1119 22:38:13.929894  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b: {Name:mkb8c9d5541b894a86911cf54efc4b7ac6afa1c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.930079  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b ...
	I1119 22:38:13.930094  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b: {Name:mk87a24e67d10968973a6f22462b3f5c313a93de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.930252  213719 certs.go:382] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt
	I1119 22:38:13.930347  213719 certs.go:386] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key
	I1119 22:38:13.930411  213719 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key
	I1119 22:38:13.930431  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt with IP's: []
	I1119 22:38:14.332796  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt ...
	I1119 22:38:14.332825  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt: {Name:mkc687d4f88c0016e52dc106cbb67f62cb641716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:14.339910  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key ...
	I1119 22:38:14.339932  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key: {Name:mk85a94508f4f26fe196530cf3fdf265d53e1f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:14.340150  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem (1338 bytes)
	W1119 22:38:14.340197  213719 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144_empty.pem, impossibly tiny 0 bytes
	I1119 22:38:14.340211  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:38:14.340237  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:38:14.340265  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:38:14.340292  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem (1675 bytes)
	I1119 22:38:14.340340  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:14.340962  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:38:14.361559  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1119 22:38:14.382612  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:38:14.402496  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:38:14.420924  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:38:14.441447  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:38:14.460685  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:38:14.479294  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:38:14.497456  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem --> /usr/share/ca-certificates/4144.pem (1338 bytes)
	I1119 22:38:14.516533  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /usr/share/ca-certificates/41442.pem (1708 bytes)
	I1119 22:38:14.535911  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:38:14.553295  213719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:38:14.567201  213719 ssh_runner.go:195] Run: openssl version
	I1119 22:38:14.573427  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4144.pem && ln -fs /usr/share/ca-certificates/4144.pem /etc/ssl/certs/4144.pem"
	I1119 22:38:14.582011  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.585596  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.585711  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.626575  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4144.pem /etc/ssl/certs/51391683.0"
	I1119 22:38:14.635818  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41442.pem && ln -fs /usr/share/ca-certificates/41442.pem /etc/ssl/certs/41442.pem"
	I1119 22:38:14.644258  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.648142  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.648249  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.689425  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41442.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:38:14.698767  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:38:14.708989  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.713003  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.713064  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.755515  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
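The openssl x509 -hash -noout calls above compute the subject-hash filenames (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's CApath lookup expects under /etc/ssl/certs, and the ln -fs commands create those links so TLS clients on the node trust the copied PEMs. The same wiring done by hand, assuming the cert is already at the path the log uses:

	# Create the hashed symlink OpenSSL's CApath lookup expects.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"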
	I1119 22:38:14.766003  213719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:38:14.769904  213719 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:38:14.769997  213719 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-570856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:38:14.770068  213719 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:38:14.770172  213719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:38:14.831712  213719 cri.go:89] found id: ""
	I1119 22:38:14.831793  213719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:38:14.844012  213719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:38:14.859844  213719 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:38:14.859902  213719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:38:14.875606  213719 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:38:14.875626  213719 kubeadm.go:158] found existing configuration files:
	
	I1119 22:38:14.875678  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 22:38:14.887366  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:38:14.887426  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:38:14.898741  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 22:38:14.907757  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:38:14.907816  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:38:14.915056  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 22:38:14.925190  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:38:14.925246  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:38:14.933043  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 22:38:14.943964  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:38:14.944080  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:38:14.956850  213719 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:38:15.022467  213719 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:38:15.022528  213719 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:38:15.074445  213719 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:38:15.074520  213719 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:38:15.074585  213719 kubeadm.go:319] OS: Linux
	I1119 22:38:15.074665  213719 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:38:15.074741  213719 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:38:15.074834  213719 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:38:15.074895  213719 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:38:15.074955  213719 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:38:15.075040  213719 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:38:15.075127  213719 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:38:15.075186  213719 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:38:15.075235  213719 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:38:15.163382  213719 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:38:15.163500  213719 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:38:15.163599  213719 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:38:15.178538  213719 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:38:15.183821  213719 out.go:252]   - Generating certificates and keys ...
	I1119 22:38:15.183926  213719 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:38:15.184002  213719 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:38:16.331729  213719 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:38:14.780147  215017 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-227235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.73163045s)
	I1119 22:38:14.780195  215017 kic.go:203] duration metric: took 4.731797196s to extract preloaded images to volume ...
	W1119 22:38:14.780320  215017 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:38:14.780432  215017 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:38:14.866741  215017 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-227235 --name embed-certs-227235 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-227235 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-227235 --network embed-certs-227235 --ip 192.168.85.2 --volume embed-certs-227235:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:38:15.242087  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Running}}
	I1119 22:38:15.266134  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:15.289559  215017 cli_runner.go:164] Run: docker exec embed-certs-227235 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:38:15.358592  215017 oci.go:144] the created container "embed-certs-227235" has a running status.
	I1119 22:38:15.358618  215017 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa...
	I1119 22:38:16.151858  215017 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:38:16.174089  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:16.193774  215017 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:38:16.193801  215017 kic_runner.go:114] Args: [docker exec --privileged embed-certs-227235 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:38:16.253392  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:16.274685  215017 machine.go:94] provisionDockerMachine start ...
	I1119 22:38:16.274793  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:16.295933  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:16.296265  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:16.296279  215017 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:38:16.296925  215017 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 22:38:16.648850  213719 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:38:17.027534  213719 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:38:17.535405  213719 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:38:18.457071  213719 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:38:18.457651  213719 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-570856 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:38:18.804201  213719 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:38:18.804516  213719 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-570856 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:38:19.251890  213719 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:38:19.443919  213719 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:38:19.989042  213719 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:38:19.989481  213719 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:38:20.248156  213719 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:38:20.575822  213719 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:38:21.322497  213719 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:38:21.582497  213719 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:38:22.046631  213719 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:38:22.048792  213719 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:38:22.056417  213719 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:38:19.458283  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-227235
	
	I1119 22:38:19.458361  215017 ubuntu.go:182] provisioning hostname "embed-certs-227235"
	I1119 22:38:19.458439  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:19.482663  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:19.482955  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:19.482966  215017 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-227235 && echo "embed-certs-227235" | sudo tee /etc/hostname
	I1119 22:38:19.668227  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-227235
	
	I1119 22:38:19.668364  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:19.696161  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:19.696518  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:19.696542  215017 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-227235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-227235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-227235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:38:19.844090  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:38:19.844206  215017 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-2347/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-2347/.minikube}
	I1119 22:38:19.844292  215017 ubuntu.go:190] setting up certificates
	I1119 22:38:19.844349  215017 provision.go:84] configureAuth start
	I1119 22:38:19.844460  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:19.871920  215017 provision.go:143] copyHostCerts
	I1119 22:38:19.871992  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem, removing ...
	I1119 22:38:19.872014  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem
	I1119 22:38:19.872097  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem (1082 bytes)
	I1119 22:38:19.872221  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem, removing ...
	I1119 22:38:19.872227  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem
	I1119 22:38:19.872260  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem (1123 bytes)
	I1119 22:38:19.872326  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem, removing ...
	I1119 22:38:19.872335  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem
	I1119 22:38:19.872358  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem (1675 bytes)
	I1119 22:38:19.872412  215017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem org=jenkins.embed-certs-227235 san=[127.0.0.1 192.168.85.2 embed-certs-227235 localhost minikube]
	I1119 22:38:20.323404  215017 provision.go:177] copyRemoteCerts
	I1119 22:38:20.323526  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:38:20.323586  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.356892  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.470993  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:38:20.504362  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 22:38:20.524210  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:38:20.544124  215017 provision.go:87] duration metric: took 699.7216ms to configureAuth
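configureAuth, which just finished above, issues a server certificate signed by the minikube CA with the SAN set logged at 22:38:19 (127.0.0.1, 192.168.85.2, embed-certs-227235, localhost, minikube). A self-contained crypto/x509 sketch of that kind of issuance, with an in-memory CA and the SANs hard-coded from the log; illustrative only, not minikube's implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// in-memory CA standing in for ~/.minikube/certs/ca.pem + ca-key.pem
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// server cert with the SAN list from the provision.go:117 line above
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-227235"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"embed-certs-227235", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}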
	I1119 22:38:20.544197  215017 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:38:20.544412  215017 config.go:182] Loaded profile config "embed-certs-227235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:20.544464  215017 machine.go:97] duration metric: took 4.26975387s to provisionDockerMachine
	I1119 22:38:20.544486  215017 client.go:176] duration metric: took 11.356016876s to LocalClient.Create
	I1119 22:38:20.544525  215017 start.go:167] duration metric: took 11.356113575s to libmachine.API.Create "embed-certs-227235"
	I1119 22:38:20.544554  215017 start.go:293] postStartSetup for "embed-certs-227235" (driver="docker")
	I1119 22:38:20.544591  215017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:38:20.544678  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:38:20.544756  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.565300  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.667067  215017 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:38:20.670916  215017 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:38:20.670945  215017 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:38:20.670955  215017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/addons for local assets ...
	I1119 22:38:20.671006  215017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/files for local assets ...
	I1119 22:38:20.671083  215017 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem -> 41442.pem in /etc/ssl/certs
	I1119 22:38:20.671184  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:38:20.680266  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:20.699713  215017 start.go:296] duration metric: took 155.103351ms for postStartSetup
	I1119 22:38:20.700150  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:20.718277  215017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json ...
	I1119 22:38:20.718546  215017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:38:20.718585  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.738828  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.841296  215017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:38:20.847214  215017 start.go:128] duration metric: took 11.662337268s to createHost
	I1119 22:38:20.847254  215017 start.go:83] releasing machines lock for "embed-certs-227235", held for 11.662472169s
	I1119 22:38:20.847344  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:20.867867  215017 ssh_runner.go:195] Run: cat /version.json
	I1119 22:38:20.867920  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.868163  215017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:38:20.868220  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.898565  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.913281  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:21.018482  215017 ssh_runner.go:195] Run: systemctl --version
	I1119 22:38:21.126924  215017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:38:21.133433  215017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:38:21.133571  215017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:38:21.174802  215017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:38:21.174882  215017 start.go:496] detecting cgroup driver to use...
	I1119 22:38:21.174939  215017 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:38:21.175034  215017 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:38:21.196072  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:38:21.213194  215017 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:38:21.213331  215017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:38:21.235649  215017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:38:21.258133  215017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:38:21.407367  215017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:38:21.569958  215017 docker.go:234] disabling docker service ...
	I1119 22:38:21.570075  215017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:38:21.595432  215017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:38:21.609975  215017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:38:21.765673  215017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:38:21.920710  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:38:21.936161  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:38:21.954615  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:38:21.964563  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:38:21.973986  215017 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1119 22:38:21.974106  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1119 22:38:21.983607  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:21.993186  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:38:22.003994  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:22.014801  215017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:38:22.024224  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:38:22.034441  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:38:22.044428  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:38:22.055950  215017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:38:22.067426  215017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:38:22.076858  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:22.269285  215017 ssh_runner.go:195] Run: sudo systemctl restart containerd
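The sed runs above patch /etc/containerd/config.toml in place (pause image, cgroup driver, runc runtime version, CNI conf dir) before the daemon-reload and restart. A Go sketch of one such in-place edit, the SystemdCgroup = false rewrite, assuming the same file path; illustrative, not minikube's code:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}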
	I1119 22:38:22.431475  215017 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:38:22.431618  215017 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:38:22.438650  215017 start.go:564] Will wait 60s for crictl version
	I1119 22:38:22.438766  215017 ssh_runner.go:195] Run: which crictl
	I1119 22:38:22.442622  215017 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:38:22.484750  215017 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:38:22.484877  215017 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:22.511742  215017 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:22.537445  215017 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:38:22.540815  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:22.557518  215017 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:38:22.561769  215017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:22.577497  215017 kubeadm.go:884] updating cluster {Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:38:22.577609  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:22.577676  215017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:22.612620  215017 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:22.612641  215017 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:38:22.612700  215017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:22.639391  215017 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:22.639472  215017 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:38:22.639495  215017 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 22:38:22.639629  215017 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-227235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:38:22.639737  215017 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:38:22.675658  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:22.675677  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:22.675692  215017 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:38:22.675717  215017 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-227235 NodeName:embed-certs-227235 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:38:22.675829  215017 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-227235"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
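The kubeadm config above is rendered from Go text/template files in minikube's bootstrapper before being written to /var/tmp/minikube/kubeadm.yaml.new on the next lines. A toy template in the same spirit, producing just the networking stanza with values hard-coded from this run; a sketch, not the actual template:

package main

import (
	"os"
	"text/template"
)

const netTmpl = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("net").Parse(netTmpl))
	// values as they appear in the generated config above
	_ = t.Execute(os.Stdout, map[string]string{
		"DNSDomain":   "cluster.local",
		"PodSubnet":   "10.244.0.0/16",
		"ServiceCIDR": "10.96.0.0/12",
	})
}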
	I1119 22:38:22.675898  215017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:38:22.685785  215017 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:38:22.685854  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:38:22.694496  215017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 22:38:22.708805  215017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:38:22.723606  215017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1119 22:38:22.738717  215017 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:38:22.742965  215017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:22.753270  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:22.906872  215017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:22.924949  215017 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235 for IP: 192.168.85.2
	I1119 22:38:22.925022  215017 certs.go:195] generating shared ca certs ...
	I1119 22:38:22.925062  215017 certs.go:227] acquiring lock for ca certs: {Name:mk76285c445bf14c1e73dedba3201c9181209ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:22.925256  215017 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key
	I1119 22:38:22.925342  215017 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key
	I1119 22:38:22.925388  215017 certs.go:257] generating profile certs ...
	I1119 22:38:22.925497  215017 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key
	I1119 22:38:22.925541  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt with IP's: []
	I1119 22:38:22.060241  213719 out.go:252]   - Booting up control plane ...
	I1119 22:38:22.060350  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:38:22.060434  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:38:22.060504  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:38:22.079017  213719 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:38:22.079368  213719 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:38:22.087584  213719 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:38:22.087933  213719 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:38:22.087982  213719 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:38:22.256548  213719 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:38:22.256676  213719 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:38:23.257718  213719 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001280368s
	I1119 22:38:23.261499  213719 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:38:23.261885  213719 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1119 22:38:23.262185  213719 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:38:23.262436  213719 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
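The control-plane checks above poll each component's livez/healthz endpoint until it returns 200 or the 4m0s budget expires (this run's kube-apiserver turns healthy at 22:38:37 further down). A minimal Go sketch of that polling loop, with a hypothetical waitHealthy helper; illustrative, not kubeadm's code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy retries a GET against a health endpoint until it answers 200 OK
// or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// control-plane components serve self-signed certs on localhost,
		// so skip verification for this probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	// endpoint from the kube-controller-manager check above
	if err := waitHealthy("https://127.0.0.1:10257/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}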
	I1119 22:38:23.993413  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt ...
	I1119 22:38:23.993490  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt: {Name:mk9390e430c2adf83fa83c8b0fc6b544e7c6ac73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:23.993723  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key ...
	I1119 22:38:23.993760  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key: {Name:mkcc129ed7fd3a94daf755b808df5c2ca7b4f55b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:23.993902  215017 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43
	I1119 22:38:23.993944  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 22:38:24.949512  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 ...
	I1119 22:38:24.949545  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43: {Name:mk857e8f674694c0bdb694030b2402c50649af7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:24.949819  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43 ...
	I1119 22:38:24.949838  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43: {Name:mke1e1b8b382f368b842b0b0ebd43fcff825ce2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:24.949968  215017 certs.go:382] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt
	I1119 22:38:24.950099  215017 certs.go:386] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43 -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key
	I1119 22:38:24.950220  215017 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key
	I1119 22:38:24.950254  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt with IP's: []
	I1119 22:38:25.380015  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt ...
	I1119 22:38:25.380052  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt: {Name:mk60463442a2346a7467c65f294d7610875ba798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:25.381096  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key ...
	I1119 22:38:25.381124  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key: {Name:mkcc9ad63005e92a3409d0552d96d1073c0ab984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:25.381427  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem (1338 bytes)
	W1119 22:38:25.381505  215017 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144_empty.pem, impossibly tiny 0 bytes
	I1119 22:38:25.381526  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:38:25.381569  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:38:25.381616  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:38:25.381661  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem (1675 bytes)
	I1119 22:38:25.381777  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:25.382497  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:38:25.423747  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1119 22:38:25.460637  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:38:25.483373  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:38:25.503061  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 22:38:25.523436  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:38:25.548990  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:38:25.581396  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:38:25.622314  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem --> /usr/share/ca-certificates/4144.pem (1338 bytes)
	I1119 22:38:25.653452  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /usr/share/ca-certificates/41442.pem (1708 bytes)
	I1119 22:38:25.693769  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:38:25.730224  215017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:38:25.757903  215017 ssh_runner.go:195] Run: openssl version
	I1119 22:38:25.770954  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4144.pem && ln -fs /usr/share/ca-certificates/4144.pem /etc/ssl/certs/4144.pem"
	I1119 22:38:25.787344  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.792427  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.792569  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.854376  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4144.pem /etc/ssl/certs/51391683.0"
	I1119 22:38:25.867349  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41442.pem && ln -fs /usr/share/ca-certificates/41442.pem /etc/ssl/certs/41442.pem"
	I1119 22:38:25.885000  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.895195  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.895369  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.952771  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41442.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:38:25.969512  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:38:25.988362  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:25.994984  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:25.995107  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:26.054751  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:38:26.081314  215017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:38:26.089485  215017 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:38:26.089616  215017 kubeadm.go:401] StartCluster: {Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:38:26.089729  215017 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:38:26.089883  215017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:38:26.175081  215017 cri.go:89] found id: ""
	I1119 22:38:26.175273  215017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:38:26.201739  215017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:38:26.213453  215017 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:38:26.213538  215017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:38:26.227920  215017 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:38:26.227957  215017 kubeadm.go:158] found existing configuration files:
	
	I1119 22:38:26.228016  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:38:26.238822  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:38:26.238956  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:38:26.248847  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:38:26.259874  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:38:26.259981  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:38:26.269610  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:38:26.280662  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:38:26.280762  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:38:26.291067  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:38:26.299774  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:38:26.299863  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:38:26.307272  215017 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:38:26.359370  215017 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:38:26.359879  215017 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:38:26.392070  215017 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:38:26.392176  215017 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:38:26.392260  215017 kubeadm.go:319] OS: Linux
	I1119 22:38:26.392332  215017 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:38:26.392404  215017 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:38:26.392515  215017 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:38:26.392603  215017 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:38:26.392689  215017 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:38:26.392799  215017 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:38:26.392885  215017 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:38:26.392964  215017 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:38:26.393042  215017 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:38:26.488613  215017 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:38:26.488982  215017 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:38:26.489119  215017 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:38:26.506528  215017 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:38:26.511504  215017 out.go:252]   - Generating certificates and keys ...
	I1119 22:38:26.511614  215017 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:38:26.511693  215017 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:38:27.434809  215017 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:38:27.852737  215017 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:38:28.219331  215017 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:38:28.667646  215017 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:38:29.503070  215017 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:38:29.503604  215017 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-227235 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:38:29.941520  215017 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:38:29.942072  215017 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-227235 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:38:30.399611  215017 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:38:30.598854  215017 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:38:31.066766  215017 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:38:31.067322  215017 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:38:31.727030  215017 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:38:33.054496  215017 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:38:33.215756  215017 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:38:33.577706  215017 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:38:33.942194  215017 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:38:33.943308  215017 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:38:33.946457  215017 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:38:33.309225  213719 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 10.04648217s
	I1119 22:38:36.096444  213719 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.833517484s
	I1119 22:38:37.264214  213719 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.001762391s
	I1119 22:38:37.296022  213719 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:38:37.335127  213719 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:38:37.354913  213719 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:38:37.355423  213719 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-570856 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:38:37.372044  213719 kubeadm.go:319] [bootstrap-token] Using token: r8vw8k.tssokqfhghfm62o1
	I1119 22:38:33.949816  215017 out.go:252]   - Booting up control plane ...
	I1119 22:38:33.949930  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:38:33.950028  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:38:33.951280  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:38:33.979582  215017 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:38:33.979702  215017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:38:33.992539  215017 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:38:33.992652  215017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:38:33.992697  215017 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:38:34.209173  215017 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:38:34.209304  215017 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:38:35.710488  215017 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501663596s
	I1119 22:38:35.713801  215017 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:38:35.714133  215017 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 22:38:35.714829  215017 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:38:35.715359  215017 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:38:37.374987  213719 out.go:252]   - Configuring RBAC rules ...
	I1119 22:38:37.375116  213719 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:38:37.383216  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:38:37.395526  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:38:37.407816  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:38:37.414859  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:38:37.420042  213719 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:38:37.672205  213719 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:38:38.187591  213719 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:38:38.676130  213719 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:38:38.677635  213719 kubeadm.go:319] 
	I1119 22:38:38.677723  213719 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:38:38.677730  213719 kubeadm.go:319] 
	I1119 22:38:38.677810  213719 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:38:38.677815  213719 kubeadm.go:319] 
	I1119 22:38:38.677841  213719 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:38:38.678403  213719 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:38:38.678471  213719 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:38:38.678477  213719 kubeadm.go:319] 
	I1119 22:38:38.678533  213719 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:38:38.678538  213719 kubeadm.go:319] 
	I1119 22:38:38.678587  213719 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:38:38.678591  213719 kubeadm.go:319] 
	I1119 22:38:38.678645  213719 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:38:38.678746  213719 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:38:38.678817  213719 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:38:38.678822  213719 kubeadm.go:319] 
	I1119 22:38:38.679193  213719 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:38:38.679286  213719 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:38:38.679291  213719 kubeadm.go:319] 
	I1119 22:38:38.679572  213719 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token r8vw8k.tssokqfhghfm62o1 \
	I1119 22:38:38.679686  213719 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 \
	I1119 22:38:38.690497  213719 kubeadm.go:319] 	--control-plane 
	I1119 22:38:38.690515  213719 kubeadm.go:319] 
	I1119 22:38:38.690863  213719 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:38:38.690881  213719 kubeadm.go:319] 
	I1119 22:38:38.691192  213719 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token r8vw8k.tssokqfhghfm62o1 \
	I1119 22:38:38.691498  213719 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 
	I1119 22:38:38.710307  213719 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:38:38.710544  213719 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:38:38.710653  213719 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
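	
	For reference, the --discovery-token-ca-cert-hash printed in the join command above is the hex-encoded SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info (per RFC 7469), which is why both clusters in this run show the same hash: they share the same generated CA material layout. A minimal Go sketch that recomputes it from a kubeadm-style CA file (the path is the standard kubeadm location, assumed here):
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		// Standard kubeadm CA path on the node (assumed).
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded Subject Public Key Info, not the whole certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}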
	I1119 22:38:38.710672  213719 cni.go:84] Creating CNI manager for ""
	I1119 22:38:38.710679  213719 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:38.713840  213719 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:38:38.716961  213719 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:38:38.736887  213719 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:38:38.736905  213719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:38:38.789317  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:38:39.400153  213719 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:38:39.400321  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:39.400530  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-570856 minikube.k8s.io/updated_at=2025_11_19T22_38_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=default-k8s-diff-port-570856 minikube.k8s.io/primary=true
	I1119 22:38:39.975271  213719 ops.go:34] apiserver oom_adj: -16
	I1119 22:38:39.975391  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:40.475885  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:40.976254  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:41.475492  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:41.975953  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:42.476216  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:42.976019  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:43.476374  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:43.938571  213719 kubeadm.go:1114] duration metric: took 4.538317084s to wait for elevateKubeSystemPrivileges
	I1119 22:38:43.938601  213719 kubeadm.go:403] duration metric: took 29.168610658s to StartCluster
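	
	The repeated `kubectl get sa default` runs above are a fixed-interval poll: minikube waits for the "default" service account to exist before binding kube-system privileges, and the timestamps show roughly 500ms between attempts. A sketch of the same pattern (command, interval, and overall budget assumed):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute) // overall budget, assumed
		for time.Now().Before(deadline) {
			// Succeeds once the controller manager has created the default service account.
			if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
		}
		fmt.Println("timed out waiting for the default service account")
	}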
	I1119 22:38:43.938617  213719 settings.go:142] acquiring lock: {Name:mk5c8f7d46662d574c7e53cf7b09709855a1e14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:43.938675  213719 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:43.939379  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:43.939602  213719 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:43.939699  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:38:43.939950  213719 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:43.939984  213719 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:38:43.940039  213719 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-570856"
	I1119 22:38:43.940056  213719 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-570856"
	I1119 22:38:43.940077  213719 host.go:66] Checking if "default-k8s-diff-port-570856" exists ...
	I1119 22:38:43.940595  213719 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-570856"
	I1119 22:38:43.940614  213719 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-570856"
	I1119 22:38:43.940913  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:43.941163  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:43.943262  213719 out.go:179] * Verifying Kubernetes components...
	I1119 22:38:43.946436  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:43.988827  213719 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:38:43.992407  213719 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:43.992429  213719 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:38:43.992505  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:44.003465  213719 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-570856"
	I1119 22:38:44.003510  213719 host.go:66] Checking if "default-k8s-diff-port-570856" exists ...
	I1119 22:38:44.003968  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:44.031387  213719 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:44.031407  213719 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:38:44.031480  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:44.054335  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:44.071105  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:44.576022  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:38:44.576179  213719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:44.632284  213719 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:44.830916  213719 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:45.842317  213719 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.266107104s)
	I1119 22:38:45.843122  213719 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-570856" to be "Ready" ...
	I1119 22:38:45.843439  213719 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.267383122s)
	I1119 22:38:45.843467  213719 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
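	
	The injection above edits the Corefile stored in the coredns ConfigMap, adding a hosts stanza that resolves host.minikube.internal to the gateway IP before the forward plugin consults /etc/resolv.conf. A Go sketch of just the hosts insertion that the sed pipeline performs (the trimmed Corefile is illustrative, not the full shipped config):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		// A trimmed Corefile in the shape CoreDNS ships with (illustrative only).
		corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n}\n"
		// Insert the hosts stanza ahead of the forward plugin, mirroring the sed pipeline above.
		stanza := "    hosts {\n       192.168.76.1 host.minikube.internal\n       fallthrough\n    }\n"
		patched := strings.Replace(corefile, "    forward .", stanza+"    forward .", 1)
		fmt.Println(patched)
	}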
	I1119 22:38:45.844308  213719 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21199704s)
	I1119 22:38:46.281571  213719 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.450606827s)
	I1119 22:38:46.284845  213719 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 22:38:46.287763  213719 addons.go:515] duration metric: took 2.347755369s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:38:46.347624  213719 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-570856" context rescaled to 1 replicas
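	
	The rescale above trims the CoreDNS Deployment to a single replica, which is enough for a one-node cluster. A client-go sketch of the same operation via the scale subresource (kubeconfig path assumed from the log):
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path assumed
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		// Read the current scale of the coredns Deployment, then write it back with one replica.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}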
	I1119 22:38:44.428112  215017 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 8.712323929s
	I1119 22:38:45.320373  215017 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.604203465s
	I1119 22:38:46.717967  215017 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.003347835s
	I1119 22:38:46.741715  215017 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:38:46.757144  215017 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:38:46.772462  215017 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:38:46.772924  215017 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-227235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:38:46.785381  215017 kubeadm.go:319] [bootstrap-token] Using token: ocom7o.y2g4phnwe8gpvos5
	I1119 22:38:46.788355  215017 out.go:252]   - Configuring RBAC rules ...
	I1119 22:38:46.788494  215017 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:38:46.793683  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:38:46.802650  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:38:46.811439  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:38:46.816154  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:38:46.823297  215017 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:38:47.128653  215017 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:38:47.591010  215017 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:38:48.125064  215017 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:38:48.126191  215017 kubeadm.go:319] 
	I1119 22:38:48.126264  215017 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:38:48.126270  215017 kubeadm.go:319] 
	I1119 22:38:48.126346  215017 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:38:48.126350  215017 kubeadm.go:319] 
	I1119 22:38:48.126376  215017 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:38:48.126445  215017 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:38:48.126502  215017 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:38:48.126506  215017 kubeadm.go:319] 
	I1119 22:38:48.126560  215017 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:38:48.126564  215017 kubeadm.go:319] 
	I1119 22:38:48.126611  215017 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:38:48.126618  215017 kubeadm.go:319] 
	I1119 22:38:48.126669  215017 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:38:48.126743  215017 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:38:48.126818  215017 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:38:48.126826  215017 kubeadm.go:319] 
	I1119 22:38:48.126910  215017 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:38:48.126985  215017 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:38:48.126989  215017 kubeadm.go:319] 
	I1119 22:38:48.127072  215017 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ocom7o.y2g4phnwe8gpvos5 \
	I1119 22:38:48.127175  215017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 \
	I1119 22:38:48.127195  215017 kubeadm.go:319] 	--control-plane 
	I1119 22:38:48.127200  215017 kubeadm.go:319] 
	I1119 22:38:48.127283  215017 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:38:48.127287  215017 kubeadm.go:319] 
	I1119 22:38:48.127368  215017 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ocom7o.y2g4phnwe8gpvos5 \
	I1119 22:38:48.127478  215017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 
	I1119 22:38:48.131460  215017 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:38:48.131800  215017 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:38:48.131963  215017 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:38:48.132002  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:48.132025  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:48.135396  215017 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:38:48.138681  215017 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:38:48.143238  215017 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:38:48.143261  215017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:38:48.157842  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:38:48.509463  215017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:38:48.509605  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:48.509695  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-227235 minikube.k8s.io/updated_at=2025_11_19T22_38_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=embed-certs-227235 minikube.k8s.io/primary=true
	I1119 22:38:48.531347  215017 ops.go:34] apiserver oom_adj: -16
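	
	The -16 reported above comes from reading the apiserver's /proc oom_adj file, as the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run shows; a negative value makes the kernel's OOM killer less likely to select the process. A minimal Go equivalent (note that oom_adj is the legacy interface; modern kernels prefer oom_score_adj):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		// -x: exact name match, -n: newest matching process, so we get a single PID.
		out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}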
	W1119 22:38:47.847437  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:50.346251  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:38:48.707714  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:49.208479  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:49.708331  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:50.207957  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:50.708351  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:51.208551  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:51.707874  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.208750  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.708197  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.870444  215017 kubeadm.go:1114] duration metric: took 4.360885722s to wait for elevateKubeSystemPrivileges
	I1119 22:38:52.870476  215017 kubeadm.go:403] duration metric: took 26.780891514s to StartCluster
	I1119 22:38:52.870495  215017 settings.go:142] acquiring lock: {Name:mk5c8f7d46662d574c7e53cf7b09709855a1e14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:52.870563  215017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:52.871877  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:52.872086  215017 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:52.872205  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:38:52.872510  215017 config.go:182] Loaded profile config "embed-certs-227235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:52.872559  215017 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:38:52.872623  215017 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-227235"
	I1119 22:38:52.872642  215017 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-227235"
	I1119 22:38:52.872666  215017 host.go:66] Checking if "embed-certs-227235" exists ...
	I1119 22:38:52.873151  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.873401  215017 addons.go:70] Setting default-storageclass=true in profile "embed-certs-227235"
	I1119 22:38:52.873423  215017 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-227235"
	I1119 22:38:52.873686  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.875844  215017 out.go:179] * Verifying Kubernetes components...
	I1119 22:38:52.879063  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:52.907006  215017 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:38:52.909996  215017 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:52.910022  215017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:38:52.910096  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:52.917662  215017 addons.go:239] Setting addon default-storageclass=true in "embed-certs-227235"
	I1119 22:38:52.917721  215017 host.go:66] Checking if "embed-certs-227235" exists ...
	I1119 22:38:52.918300  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.944204  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:52.957685  215017 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:52.957706  215017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:38:52.957769  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:52.993629  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:53.201073  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:38:53.201195  215017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:53.314355  215017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:53.327779  215017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:53.841120  215017 node_ready.go:35] waiting up to 6m0s for node "embed-certs-227235" to be "Ready" ...
	I1119 22:38:53.841457  215017 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 22:38:54.280299  215017 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1119 22:38:52.346734  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:54.347319  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:38:54.283209  215017 addons.go:515] duration metric: took 1.410633606s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:38:54.349594  215017 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-227235" context rescaled to 1 replicas
	W1119 22:38:55.844628  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:38:58.344650  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:38:56.846106  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:58.846730  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:00.846861  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:00.347351  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:02.844246  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:02.847116  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:05.346461  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:04.845042  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:07.345010  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:07.347215  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:09.846094  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:09.345198  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:11.346411  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:11.846299  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:13.846861  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:16.347393  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:13.844623  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:16.344779  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:18.345372  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:18.846715  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:21.346432  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:20.347964  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:22.843854  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:23.846693  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:39:25.847621  213719 node_ready.go:49] node "default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:25.847652  213719 node_ready.go:38] duration metric: took 40.004497931s for node "default-k8s-diff-port-570856" to be "Ready" ...
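	
	The roughly 40-second wait above is the node_ready poll: it re-reads the Node object until its Ready condition turns True, which the kubelet typically posts once the container runtime is healthy and a CNI config (here, kindnet's) is in place. A client-go sketch of the check itself:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeIsReady returns whether the named node's Ready condition is True.
	func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path assumed
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := nodeIsReady(cs, "default-k8s-diff-port-570856")
		if err != nil {
			panic(err)
		}
		fmt.Println("Ready:", ready)
	}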
	I1119 22:39:25.847666  213719 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:39:25.847724  213719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:39:25.874926  213719 api_server.go:72] duration metric: took 41.935286387s to wait for apiserver process to appear ...
	I1119 22:39:25.874949  213719 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:39:25.874968  213719 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1119 22:39:25.885461  213719 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
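	
	The healthz probe above is a plain HTTPS GET against the apiserver; a 200 with body "ok" counts as healthy. A minimal sketch (certificate verification is skipped here purely to keep it self-contained; the real check trusts the cluster CA):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Insecure for the sketch only; minikube verifies against the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.76.2:8444/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body)
	}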
	I1119 22:39:25.887414  213719 api_server.go:141] control plane version: v1.34.1
	I1119 22:39:25.887438  213719 api_server.go:131] duration metric: took 12.482962ms to wait for apiserver health ...
	I1119 22:39:25.887448  213719 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:39:25.891159  213719 system_pods.go:59] 8 kube-system pods found
	I1119 22:39:25.891193  213719 system_pods.go:61] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:25.891200  213719 system_pods.go:61] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:25.891207  213719 system_pods.go:61] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:25.891212  213719 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:25.891217  213719 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:25.891221  213719 system_pods.go:61] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:25.891226  213719 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:25.891231  213719 system_pods.go:61] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:25.891238  213719 system_pods.go:74] duration metric: took 3.784369ms to wait for pod list to return data ...
	I1119 22:39:25.891248  213719 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:39:25.894907  213719 default_sa.go:45] found service account: "default"
	I1119 22:39:25.894971  213719 default_sa.go:55] duration metric: took 3.716182ms for default service account to be created ...
	I1119 22:39:25.894995  213719 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:39:25.898958  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:25.899042  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:25.899064  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:25.899105  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:25.899128  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:25.899147  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:25.899170  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:25.899190  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:25.899259  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:25.899299  213719 retry.go:31] will retry after 294.705373ms: missing components: kube-dns
	I1119 22:39:26.198486  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.198523  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:26.198531  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.198541  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.198546  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.198552  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.198556  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.198561  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.198566  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:26.198584  213719 retry.go:31] will retry after 303.182095ms: missing components: kube-dns
	I1119 22:39:26.506554  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.506591  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:26.506598  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.506604  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.506608  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.506613  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.506618  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.506622  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.506627  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:26.506647  213719 retry.go:31] will retry after 472.574028ms: missing components: kube-dns
	I1119 22:39:26.984178  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.984212  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Running
	I1119 22:39:26.984220  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.984226  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.984231  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.984235  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.984239  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.984243  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.984247  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Running
	I1119 22:39:26.984255  213719 system_pods.go:126] duration metric: took 1.089240935s to wait for k8s-apps to be running ...
	I1119 22:39:26.984269  213719 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:39:26.984329  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:39:26.998904  213719 system_svc.go:56] duration metric: took 14.6234ms WaitForService to wait for kubelet
	I1119 22:39:26.998932  213719 kubeadm.go:587] duration metric: took 43.05929861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:39:26.998953  213719 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:39:27.002787  213719 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:39:27.003037  213719 node_conditions.go:123] node cpu capacity is 2
	I1119 22:39:27.003065  213719 node_conditions.go:105] duration metric: took 4.106062ms to run NodePressure ...
	I1119 22:39:27.003081  213719 start.go:242] waiting for startup goroutines ...
	I1119 22:39:27.003095  213719 start.go:247] waiting for cluster config update ...
	I1119 22:39:27.003112  213719 start.go:256] writing updated cluster config ...
	I1119 22:39:27.003490  213719 ssh_runner.go:195] Run: rm -f paused
	I1119 22:39:27.008294  213719 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:27.012665  213719 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4m8f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.017786  213719 pod_ready.go:94] pod "coredns-66bc5c9577-4m8f2" is "Ready"
	I1119 22:39:27.017812  213719 pod_ready.go:86] duration metric: took 5.121391ms for pod "coredns-66bc5c9577-4m8f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.020648  213719 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.025936  213719 pod_ready.go:94] pod "etcd-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.026011  213719 pod_ready.go:86] duration metric: took 5.321771ms for pod "etcd-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.028977  213719 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.034047  213719 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.034073  213719 pod_ready.go:86] duration metric: took 5.070216ms for pod "kube-apiserver-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.036706  213719 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.413085  213719 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.413111  213719 pod_ready.go:86] duration metric: took 376.376792ms for pod "kube-controller-manager-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.613330  213719 pod_ready.go:83] waiting for pod "kube-proxy-n4868" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.012703  213719 pod_ready.go:94] pod "kube-proxy-n4868" is "Ready"
	I1119 22:39:28.012745  213719 pod_ready.go:86] duration metric: took 399.33038ms for pod "kube-proxy-n4868" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.213996  213719 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.613271  213719 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:28.613305  213719 pod_ready.go:86] duration metric: took 399.283191ms for pod "kube-scheduler-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.613319  213719 pod_ready.go:40] duration metric: took 1.604992351s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:28.668463  213719 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:39:28.671810  213719 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-570856" cluster and "default" namespace by default
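	
	The "minor skew: 1" note above compares kubectl 1.33 against the 1.34 control plane; kubectl is supported within one minor version of the apiserver in either direction, so this is informational rather than an error. A sketch of the comparison:
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// minorOf extracts the minor number from a "major.minor.patch" version string.
	func minorOf(v string) int {
		m, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return m
	}
	
	func main() {
		client, server := "1.33.2", "1.34.1" // versions from the log above
		skew := minorOf(server) - minorOf(client)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("minor skew: %d (kubectl supports a skew of at most 1)\n", skew)
	}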
	W1119 22:39:24.844923  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:26.845154  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:29.344473  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:31.844696  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	I1119 22:39:34.345023  215017 node_ready.go:49] node "embed-certs-227235" is "Ready"
	I1119 22:39:34.345048  215017 node_ready.go:38] duration metric: took 40.503896306s for node "embed-certs-227235" to be "Ready" ...
	I1119 22:39:34.345063  215017 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:39:34.345119  215017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:39:34.362404  215017 api_server.go:72] duration metric: took 41.490288995s to wait for apiserver process to appear ...
	I1119 22:39:34.362426  215017 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:39:34.362445  215017 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:39:34.390640  215017 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:39:34.392448  215017 api_server.go:141] control plane version: v1.34.1
	I1119 22:39:34.392508  215017 api_server.go:131] duration metric: took 30.073646ms to wait for apiserver health ...
	I1119 22:39:34.392532  215017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:39:34.400782  215017 system_pods.go:59] 8 kube-system pods found
	I1119 22:39:34.400862  215017 system_pods.go:61] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.400885  215017 system_pods.go:61] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.400909  215017 system_pods.go:61] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.400930  215017 system_pods.go:61] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.400951  215017 system_pods.go:61] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.400973  215017 system_pods.go:61] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.400994  215017 system_pods.go:61] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.401017  215017 system_pods.go:61] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.401041  215017 system_pods.go:74] duration metric: took 8.489033ms to wait for pod list to return data ...
	I1119 22:39:34.401063  215017 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:39:34.404927  215017 default_sa.go:45] found service account: "default"
	I1119 22:39:34.404991  215017 default_sa.go:55] duration metric: took 3.906002ms for default service account to be created ...
	I1119 22:39:34.405016  215017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:39:34.408626  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.408709  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.408731  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.408754  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.408780  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.408803  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.408827  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.408848  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.408881  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.408920  215017 retry.go:31] will retry after 270.078819ms: missing components: kube-dns
	I1119 22:39:34.682801  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.682906  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.682929  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.682965  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.682988  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.683010  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.683041  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.683064  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.683087  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.683118  215017 retry.go:31] will retry after 271.259245ms: missing components: kube-dns
	I1119 22:39:34.958505  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.958539  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Running
	I1119 22:39:34.958547  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.958551  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.958557  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.958584  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.958595  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.958600  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.958603  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Running
	I1119 22:39:34.958612  215017 system_pods.go:126] duration metric: took 553.576677ms to wait for k8s-apps to be running ...
	I1119 22:39:34.958625  215017 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:39:34.958694  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:39:34.972706  215017 system_svc.go:56] duration metric: took 14.071483ms WaitForService to wait for kubelet
	I1119 22:39:34.972778  215017 kubeadm.go:587] duration metric: took 42.100669257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:39:34.972814  215017 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:39:34.975990  215017 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:39:34.976072  215017 node_conditions.go:123] node cpu capacity is 2
	I1119 22:39:34.976093  215017 node_conditions.go:105] duration metric: took 3.255435ms to run NodePressure ...
	I1119 22:39:34.976107  215017 start.go:242] waiting for startup goroutines ...
	I1119 22:39:34.976115  215017 start.go:247] waiting for cluster config update ...
	I1119 22:39:34.976126  215017 start.go:256] writing updated cluster config ...
	I1119 22:39:34.976427  215017 ssh_runner.go:195] Run: rm -f paused
	I1119 22:39:34.980344  215017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:34.985616  215017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6xhjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:34.991603  215017 pod_ready.go:94] pod "coredns-66bc5c9577-6xhjj" is "Ready"
	I1119 22:39:34.991644  215017 pod_ready.go:86] duration metric: took 5.99596ms for pod "coredns-66bc5c9577-6xhjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:34.994018  215017 pod_ready.go:83] waiting for pod "etcd-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.003190  215017 pod_ready.go:94] pod "etcd-embed-certs-227235" is "Ready"
	I1119 22:39:35.003274  215017 pod_ready.go:86] duration metric: took 9.230481ms for pod "etcd-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.007638  215017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.016450  215017 pod_ready.go:94] pod "kube-apiserver-embed-certs-227235" is "Ready"
	I1119 22:39:35.016480  215017 pod_ready.go:86] duration metric: took 8.80742ms for pod "kube-apiserver-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.019656  215017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.385673  215017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-227235" is "Ready"
	I1119 22:39:35.385700  215017 pod_ready.go:86] duration metric: took 365.999627ms for pod "kube-controller-manager-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.584880  215017 pod_ready.go:83] waiting for pod "kube-proxy-plgtr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.984356  215017 pod_ready.go:94] pod "kube-proxy-plgtr" is "Ready"
	I1119 22:39:35.984391  215017 pod_ready.go:86] duration metric: took 399.485083ms for pod "kube-proxy-plgtr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.185075  215017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.585576  215017 pod_ready.go:94] pod "kube-scheduler-embed-certs-227235" is "Ready"
	I1119 22:39:36.585603  215017 pod_ready.go:86] duration metric: took 400.501535ms for pod "kube-scheduler-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.585617  215017 pod_ready.go:40] duration metric: took 1.605197997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:36.654842  215017 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:39:36.659599  215017 out.go:179] * Done! kubectl is now configured to use "embed-certs-227235" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	74e3c4a4051a2       1611cd07b61d5       9 seconds ago        Running             busybox                   0                   1a3da71ef5cbb       busybox                                                default
	3da642e62162f       138784d87c9c5       14 seconds ago       Running             coredns                   0                   ce790582b535e       coredns-66bc5c9577-4m8f2                               kube-system
	ac19323559deb       ba04bb24b9575       14 seconds ago       Running             storage-provisioner       0                   ddab1664cb1b4       storage-provisioner                                    kube-system
	5d9cf5103ba44       05baa95f5142d       56 seconds ago       Running             kube-proxy                0                   dc1d0407b897c       kube-proxy-n4868                                       kube-system
	2644752343f75       b1a8c6f707935       56 seconds ago       Running             kindnet-cni               0                   3be3aa964521e       kindnet-n8jjs                                          kube-system
	829c562f0f222       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   58e7f16f559de       kube-apiserver-default-k8s-diff-port-570856            kube-system
	e4c4039c8a727       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   be66d6bc890de       kube-scheduler-default-k8s-diff-port-570856            kube-system
	7036e1f00cb91       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   78cab9510dd24       kube-controller-manager-default-k8s-diff-port-570856   kube-system
	7d268decdd0d9       a1894772a478e       About a minute ago   Running             etcd                      0                   1f7b11105786b       etcd-default-k8s-diff-port-570856                      kube-system
	
	
	==> containerd <==
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.901443418Z" level=info msg="connecting to shim ac19323559deb019c92d46623f8f93f141457384cef6ce6e8a9841354bf572f9" address="unix:///run/containerd/s/9a1b16d324b9a671f85f1750ce7f5bb69063a867b33c34598f859921a917a0e3" protocol=ttrpc version=3
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.918541527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4m8f2,Uid:92627362-0048-4b1a-af4e-7f9d8c53a483,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce790582b535e513887bddf96766a9a8ecfd6e0197d7ca84cbf1822f125bf5b1\""
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.929815419Z" level=info msg="CreateContainer within sandbox \"ce790582b535e513887bddf96766a9a8ecfd6e0197d7ca84cbf1822f125bf5b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.941110193Z" level=info msg="Container 3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.952176697Z" level=info msg="CreateContainer within sandbox \"ce790582b535e513887bddf96766a9a8ecfd6e0197d7ca84cbf1822f125bf5b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04\""
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.953018019Z" level=info msg="StartContainer for \"3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04\""
	Nov 19 22:39:25 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:25.954053862Z" level=info msg="connecting to shim 3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04" address="unix:///run/containerd/s/c954935da72e80067f478974bf94d1c0e8514a06f70ad40469e0d1a929a88edc" protocol=ttrpc version=3
	Nov 19 22:39:26 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:26.005099021Z" level=info msg="StartContainer for \"ac19323559deb019c92d46623f8f93f141457384cef6ce6e8a9841354bf572f9\" returns successfully"
	Nov 19 22:39:26 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:26.049770008Z" level=info msg="StartContainer for \"3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04\" returns successfully"
	Nov 19 22:39:29 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:29.251702501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7195bbcd-aea0-4b92-b3d2-0e76651191f2,Namespace:default,Attempt:0,}"
	Nov 19 22:39:29 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:29.315085562Z" level=info msg="connecting to shim 1a3da71ef5cbb9437982964f37ad518852f9a8f293e918e817ac128904429709" address="unix:///run/containerd/s/050542b8bd9ad20d514db97fd26aa611141a11e653957fe3d3f85227a6c095b1" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:39:29 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:29.392832278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7195bbcd-aea0-4b92-b3d2-0e76651191f2,Namespace:default,Attempt:0,} returns sandbox id \"1a3da71ef5cbb9437982964f37ad518852f9a8f293e918e817ac128904429709\""
	Nov 19 22:39:29 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:29.397479646Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.542850807Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.544740526Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.547404742Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.551765469Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.552307891Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.15462186s"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.552358025Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.560427616Z" level=info msg="CreateContainer within sandbox \"1a3da71ef5cbb9437982964f37ad518852f9a8f293e918e817ac128904429709\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.578433180Z" level=info msg="Container 74e3c4a4051a25d4276374e92c83daaa0fe5a861a1520699792bcdb502865953: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.589877904Z" level=info msg="CreateContainer within sandbox \"1a3da71ef5cbb9437982964f37ad518852f9a8f293e918e817ac128904429709\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"74e3c4a4051a25d4276374e92c83daaa0fe5a861a1520699792bcdb502865953\""
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.591965631Z" level=info msg="StartContainer for \"74e3c4a4051a25d4276374e92c83daaa0fe5a861a1520699792bcdb502865953\""
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.592950864Z" level=info msg="connecting to shim 74e3c4a4051a25d4276374e92c83daaa0fe5a861a1520699792bcdb502865953" address="unix:///run/containerd/s/050542b8bd9ad20d514db97fd26aa611141a11e653957fe3d3f85227a6c095b1" protocol=ttrpc version=3
	Nov 19 22:39:31 default-k8s-diff-port-570856 containerd[760]: time="2025-11-19T22:39:31.655026562Z" level=info msg="StartContainer for \"74e3c4a4051a25d4276374e92c83daaa0fe5a861a1520699792bcdb502865953\" returns successfully"
	
	
	==> coredns [3da642e62162f3b53ab9cca81c09853112a192a439b6cab3c5047ef0a7f63b04] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41400 - 52734 "HINFO IN 3852003297008482046.8189843040733732678. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014915759s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-570856
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-570856
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=default-k8s-diff-port-570856
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_38_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:38:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-570856
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:39:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:39:40 +0000   Wed, 19 Nov 2025 22:38:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:39:40 +0000   Wed, 19 Nov 2025 22:38:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:39:40 +0000   Wed, 19 Nov 2025 22:38:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:39:40 +0000   Wed, 19 Nov 2025 22:39:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-570856
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                78e41195-0661-4dc0-9108-7c4f38576a10
	  Boot ID:                    b3875353-65b3-44b7-ad72-afadd7e2486a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-4m8f2                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-default-k8s-diff-port-570856                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-n8jjs                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-default-k8s-diff-port-570856             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-570856    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-n4868                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-default-k8s-diff-port-570856             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node default-k8s-diff-port-570856 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           58s                node-controller  Node default-k8s-diff-port-570856 event: Registered Node default-k8s-diff-port-570856 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-570856 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.032038] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[Nov19 21:18] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034282] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.730183] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.763794] kauditd_printk_skb: 36 callbacks suppressed
	[Nov19 21:50] hrtimer: interrupt took 11278311 ns
	
	
	==> etcd [7d268decdd0d9cc7d8445383e18deefcb2546926ad65b92e663c16dceaf5dba7] <==
	{"level":"warn","ts":"2025-11-19T22:38:31.510957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.580152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.654368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.682405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.718538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.769872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.826650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.853528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.907149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:31.947648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.007348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.040302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.089065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.142075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.187573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.246429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.284408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.356941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.395060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.438957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.484246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.534361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.578753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.604360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:32.830345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40306","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:39:40 up  1:21,  0 user,  load average: 3.30, 3.50, 2.87
	Linux default-k8s-diff-port-570856 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2644752343f75da9f774078a18f0ed03507320888681802aff4255970379b716] <==
	I1119 22:38:45.010942       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:38:45.087030       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:38:45.087210       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:38:45.087227       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:38:45.087242       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:38:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:38:45.313290       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:38:45.313311       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:38:45.313320       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:38:45.313677       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:39:15.312877       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 22:39:15.314028       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 22:39:15.314029       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:39:15.314107       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 22:39:16.513503       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:39:16.513537       1 metrics.go:72] Registering metrics
	I1119 22:39:16.513640       1 controller.go:711] "Syncing nftables rules"
	I1119 22:39:25.320002       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:39:25.320062       1 main.go:301] handling current node
	I1119 22:39:35.314240       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:39:35.314284       1 main.go:301] handling current node
	
	
	==> kube-apiserver [829c562f0f222bdcf3d0ec71ce8bbf82154469b6f01b9b3c5618df7fe63640f4] <==
	E1119 22:38:34.958031       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 22:38:35.005559       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:38:35.012650       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:35.037488       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:35.037789       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:38:35.053593       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:38:35.131021       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:38:35.275880       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:38:35.308030       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:38:35.308063       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:38:36.749579       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:38:36.813858       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:38:36.927509       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:38:36.935864       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:38:36.937220       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:38:36.950062       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:38:37.527347       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:38:38.153893       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:38:38.184681       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:38:38.203607       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:38:42.847411       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:42.885704       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:43.124192       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:38:43.528611       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 22:39:37.108049       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:59210: use of closed network connection
	
	
	==> kube-controller-manager [7036e1f00cb91c3a6b0c190abbd5baf8d233f9500feba9e54c191adab61fd1c6] <==
	I1119 22:38:42.750669       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:38:42.751037       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:38:42.751203       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:38:42.751488       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:38:42.751666       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:38:42.752429       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:38:42.752649       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:38:42.752805       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-570856"
	I1119 22:38:42.752893       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:38:42.755783       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:38:42.774721       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:38:42.777074       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:38:42.777259       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:38:42.777343       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:38:42.777467       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:38:42.778365       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:38:42.778606       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:38:42.778753       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:38:42.786198       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:38:42.791491       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-570856" podCIDRs=["10.244.0.0/24"]
	I1119 22:38:42.794386       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:38:42.801349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:38:42.801772       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:38:42.831209       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:39:27.758901       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5d9cf5103ba441828281ac1312821dc9fdde8384b738c12b5a727db2c33097e1] <==
	I1119 22:38:45.087805       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:38:45.270816       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:38:45.374058       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:38:45.374105       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 22:38:45.374212       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:38:45.436313       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:38:45.436365       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:38:45.446434       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:38:45.446745       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:38:45.446760       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:38:45.449425       1 config.go:200] "Starting service config controller"
	I1119 22:38:45.449436       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:38:45.449453       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:38:45.449458       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:38:45.449468       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:38:45.449471       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:38:45.462358       1 config.go:309] "Starting node config controller"
	I1119 22:38:45.462381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:38:45.462390       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:38:45.550476       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:38:45.550514       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:38:45.550554       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e4c4039c8a727b705651ad9bb3ca2fec84f852b52718607b609d2e5e58012bc1] <==
	I1119 22:38:35.971471       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:38:35.992815       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:38:35.993034       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:38:35.993372       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:38:35.994099       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 22:38:36.014537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 22:38:36.016210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:38:36.021184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:38:36.023913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:38:36.023970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:38:36.024014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:38:36.048310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:38:36.024152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:38:36.024235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:38:36.024272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:38:36.024504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:38:36.024562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:38:36.051622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:38:36.051800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:38:36.051966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:38:36.052611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:38:36.052983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:38:36.024051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:38:36.053213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1119 22:38:36.993853       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:38:39 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:39.451867    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-570856" podStartSLOduration=3.451847081 podStartE2EDuration="3.451847081s" podCreationTimestamp="2025-11-19 22:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:39.429474892 +0000 UTC m=+1.326808871" watchObservedRunningTime="2025-11-19 22:38:39.451847081 +0000 UTC m=+1.349181051"
	Nov 19 22:38:39 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:39.493417    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-570856" podStartSLOduration=1.493398569 podStartE2EDuration="1.493398569s" podCreationTimestamp="2025-11-19 22:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:39.457149448 +0000 UTC m=+1.354483435" watchObservedRunningTime="2025-11-19 22:38:39.493398569 +0000 UTC m=+1.390732556"
	Nov 19 22:38:39 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:39.538843    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-570856" podStartSLOduration=1.538821008 podStartE2EDuration="1.538821008s" podCreationTimestamp="2025-11-19 22:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:39.496719343 +0000 UTC m=+1.394053331" watchObservedRunningTime="2025-11-19 22:38:39.538821008 +0000 UTC m=+1.436154979"
	Nov 19 22:38:39 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:39.539228    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-570856" podStartSLOduration=1.5392188629999999 podStartE2EDuration="1.539218863s" podCreationTimestamp="2025-11-19 22:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:39.531280931 +0000 UTC m=+1.428614918" watchObservedRunningTime="2025-11-19 22:38:39.539218863 +0000 UTC m=+1.436552842"
	Nov 19 22:38:42 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:42.878403    1482 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:38:42 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:42.886355    1482 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.650379    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f07057ba-2012-4291-ba43-a3638f7c8c58-cni-cfg\") pod \"kindnet-n8jjs\" (UID: \"f07057ba-2012-4291-ba43-a3638f7c8c58\") " pod="kube-system/kindnet-n8jjs"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658326    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/965b5310-35e9-4026-91b4-733b3eef9088-lib-modules\") pod \"kube-proxy-n4868\" (UID: \"965b5310-35e9-4026-91b4-733b3eef9088\") " pod="kube-system/kube-proxy-n4868"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658527    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrzn6\" (UniqueName: \"kubernetes.io/projected/965b5310-35e9-4026-91b4-733b3eef9088-kube-api-access-xrzn6\") pod \"kube-proxy-n4868\" (UID: \"965b5310-35e9-4026-91b4-733b3eef9088\") " pod="kube-system/kube-proxy-n4868"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658633    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f07057ba-2012-4291-ba43-a3638f7c8c58-xtables-lock\") pod \"kindnet-n8jjs\" (UID: \"f07057ba-2012-4291-ba43-a3638f7c8c58\") " pod="kube-system/kindnet-n8jjs"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658707    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6vsm\" (UniqueName: \"kubernetes.io/projected/f07057ba-2012-4291-ba43-a3638f7c8c58-kube-api-access-p6vsm\") pod \"kindnet-n8jjs\" (UID: \"f07057ba-2012-4291-ba43-a3638f7c8c58\") " pod="kube-system/kindnet-n8jjs"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658783    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/965b5310-35e9-4026-91b4-733b3eef9088-kube-proxy\") pod \"kube-proxy-n4868\" (UID: \"965b5310-35e9-4026-91b4-733b3eef9088\") " pod="kube-system/kube-proxy-n4868"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658859    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/965b5310-35e9-4026-91b4-733b3eef9088-xtables-lock\") pod \"kube-proxy-n4868\" (UID: \"965b5310-35e9-4026-91b4-733b3eef9088\") " pod="kube-system/kube-proxy-n4868"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.658928    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f07057ba-2012-4291-ba43-a3638f7c8c58-lib-modules\") pod \"kindnet-n8jjs\" (UID: \"f07057ba-2012-4291-ba43-a3638f7c8c58\") " pod="kube-system/kindnet-n8jjs"
	Nov 19 22:38:43 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:43.822987    1482 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 22:38:45 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:45.751893    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n4868" podStartSLOduration=2.7518730529999997 podStartE2EDuration="2.751873053s" podCreationTimestamp="2025-11-19 22:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:45.722613735 +0000 UTC m=+7.619947714" watchObservedRunningTime="2025-11-19 22:38:45.751873053 +0000 UTC m=+7.649207032"
	Nov 19 22:38:48 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:38:48.222608    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-n8jjs" podStartSLOduration=5.222591257 podStartE2EDuration="5.222591257s" podCreationTimestamp="2025-11-19 22:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:45.755083565 +0000 UTC m=+7.652417692" watchObservedRunningTime="2025-11-19 22:38:48.222591257 +0000 UTC m=+10.119925228"
	Nov 19 22:39:25 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:25.383137    1482 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:39:25 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:25.516684    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5tx2\" (UniqueName: \"kubernetes.io/projected/2339c18e-d677-4777-b9a8-1df877bb86be-kube-api-access-c5tx2\") pod \"storage-provisioner\" (UID: \"2339c18e-d677-4777-b9a8-1df877bb86be\") " pod="kube-system/storage-provisioner"
	Nov 19 22:39:25 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:25.516924    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2339c18e-d677-4777-b9a8-1df877bb86be-tmp\") pod \"storage-provisioner\" (UID: \"2339c18e-d677-4777-b9a8-1df877bb86be\") " pod="kube-system/storage-provisioner"
	Nov 19 22:39:25 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:25.517010    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92627362-0048-4b1a-af4e-7f9d8c53a483-config-volume\") pod \"coredns-66bc5c9577-4m8f2\" (UID: \"92627362-0048-4b1a-af4e-7f9d8c53a483\") " pod="kube-system/coredns-66bc5c9577-4m8f2"
	Nov 19 22:39:25 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:25.517053    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7mvf\" (UniqueName: \"kubernetes.io/projected/92627362-0048-4b1a-af4e-7f9d8c53a483-kube-api-access-x7mvf\") pod \"coredns-66bc5c9577-4m8f2\" (UID: \"92627362-0048-4b1a-af4e-7f9d8c53a483\") " pod="kube-system/coredns-66bc5c9577-4m8f2"
	Nov 19 22:39:26 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:26.866528    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4m8f2" podStartSLOduration=43.866494741 podStartE2EDuration="43.866494741s" podCreationTimestamp="2025-11-19 22:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:39:26.845495192 +0000 UTC m=+48.742829171" watchObservedRunningTime="2025-11-19 22:39:26.866494741 +0000 UTC m=+48.763828753"
	Nov 19 22:39:26 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:26.867258    1482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.867245969 podStartE2EDuration="40.867245969s" podCreationTimestamp="2025-11-19 22:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:39:26.866303141 +0000 UTC m=+48.763637120" watchObservedRunningTime="2025-11-19 22:39:26.867245969 +0000 UTC m=+48.764579940"
	Nov 19 22:39:29 default-k8s-diff-port-570856 kubelet[1482]: I1119 22:39:29.042907    1482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrdw9\" (UniqueName: \"kubernetes.io/projected/7195bbcd-aea0-4b92-b3d2-0e76651191f2-kube-api-access-qrdw9\") pod \"busybox\" (UID: \"7195bbcd-aea0-4b92-b3d2-0e76651191f2\") " pod="default/busybox"
	
	
	==> storage-provisioner [ac19323559deb019c92d46623f8f93f141457384cef6ce6e8a9841354bf572f9] <==
	I1119 22:39:26.006629       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:39:26.035356       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:39:26.035415       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:39:26.039606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:26.058483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:39:26.058839       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:39:26.065517       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-570856_309ed0e4-ef2c-4f9d-b78b-7da3ba544427!
	I1119 22:39:26.062025       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27af889f-69f7-4c9e-b758-7ba8f06ea50a", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-570856_309ed0e4-ef2c-4f9d-b78b-7da3ba544427 became leader
	W1119 22:39:26.069946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:26.075856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:39:26.166726       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-570856_309ed0e4-ef2c-4f9d-b78b-7da3ba544427!
	W1119 22:39:28.084056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:28.091670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:30.096715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:30.103558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:32.107742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:32.117126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:34.120355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:34.125285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:36.128986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:36.133905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:38.137470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:38.149698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:40.154018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:40.166073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-570856 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.14s)

TestStartStop/group/embed-certs/serial/DeployApp (12.9s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-227235 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3a9ffa6e-50c6-4636-a1c1-d3c478e5e486] Pending
helpers_test.go:352: "busybox" [3a9ffa6e-50c6-4636-a1c1-d3c478e5e486] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3a9ffa6e-50c6-4636-a1c1-d3c478e5e486] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004379823s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-227235 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
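
Note: this is the actual failure in all four DeployApp cases: the pod comes up healthy, but the NOFILE soft limit inside the container is the runtime default (1024) rather than the 1048576 the test expects. The assertion reduces to something like the following Go sketch (helper and variable names are illustrative, not copied from start_stop_delete_test.go):

    import (
        "os/exec"
        "strings"
        "testing"
    )

    // assertNOFILE runs `ulimit -n` inside the busybox pod, exactly as the
    // kubectl invocation logged above, and compares the soft limit it reports.
    func assertNOFILE(t *testing.T, profile string) {
        out, err := exec.Command("kubectl", "--context", profile,
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
        if err != nil {
            t.Fatalf("kubectl exec: %v: %s", err, out)
        }
        if got := strings.TrimSpace(string(out)); got != "1048576" {
            t.Errorf("'ulimit -n' returned %s, expected 1048576", got)
        }
    }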
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-227235
helpers_test.go:243: (dbg) docker inspect embed-certs-227235:

-- stdout --
	[
	    {
	        "Id": "d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65",
	        "Created": "2025-11-19T22:38:14.89237119Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216317,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:38:14.9613705Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65/hosts",
	        "LogPath": "/var/lib/docker/containers/d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65/d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65-json.log",
	        "Name": "/embed-certs-227235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-227235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-227235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65",
	                "LowerDir": "/var/lib/docker/overlay2/9e37529bf823f891186cde56bd1b9c72b6dd472ec161c8780f8b79d02781c89f-init/diff:/var/lib/docker/overlay2/b6ebc9601ea0ae08484f263713f3358dd93f7748ebfafbd9155229908dee9606/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e37529bf823f891186cde56bd1b9c72b6dd472ec161c8780f8b79d02781c89f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e37529bf823f891186cde56bd1b9c72b6dd472ec161c8780f8b79d02781c89f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e37529bf823f891186cde56bd1b9c72b6dd472ec161c8780f8b79d02781c89f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-227235",
	                "Source": "/var/lib/docker/volumes/embed-certs-227235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-227235",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-227235",
	                "name.minikube.sigs.k8s.io": "embed-certs-227235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "074ae2b20604f5cf109a5529099b7ca8b9d17e4baf842e9cae7062b942888fd1",
	            "SandboxKey": "/var/run/docker/netns/074ae2b20604",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-227235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:63:0b:50:12:58",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4579dc366f68625047c4bbef84debda5dfb8e27d05811c5f0c328cdac0d52cd1",
	                    "EndpointID": "0dd96d29824891f26029c1ee4d3ea893734d695b8c5801609f3f1d43d926017b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-227235",
	                        "d6f2464a8f7d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
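
Note: in the HostConfig above, "Ulimits": [] means the kic container sets no explicit rlimit overrides, so processes inside inherit the Docker daemon's defaults; that is consistent with the 1024 that `ulimit -n` reported in the pod. The same field can be read with the Docker Go SDK, e.g. this sketch (assumes the standard DOCKER_HOST environment, as on the CI host):

    import (
        "context"
        "fmt"

        "github.com/docker/docker/client"
    )

    // printUlimits dumps HostConfig.Ulimits for a container; an empty slice
    // means the daemon-level default rlimits apply inside the container.
    func printUlimits(name string) error {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            return err
        }
        defer cli.Close()
        info, err := cli.ContainerInspect(context.Background(), name)
        if err != nil {
            return err
        }
        fmt.Printf("%s Ulimits: %+v\n", name, info.HostConfig.Ulimits)
        return nil
    }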
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-227235 -n embed-certs-227235
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-227235 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-227235 logs -n 25: (1.179158718s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-156590 sudo crio config                                                                                                                                                                                                                   │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ delete  │ -p cilium-156590                                                                                                                                                                                                                                    │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-expiration-750367 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ ssh     │ force-systemd-env-388402 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-388402     │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p force-systemd-env-388402                                                                                                                                                                                                                         │ force-systemd-env-388402     │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-options-815306 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ cert-options-815306 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p cert-options-815306 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ delete  │ -p cert-options-815306                                                                                                                                                                                                                              │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ start   │ -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-264160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:36 UTC │ 19 Nov 25 22:36 UTC │
	│ stop    │ -p old-k8s-version-264160 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:36 UTC │ 19 Nov 25 22:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-264160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ start   │ -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ image   │ old-k8s-version-264160 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ pause   │ -p old-k8s-version-264160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ start   │ -p cert-expiration-750367 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:38 UTC │
	│ unpause │ -p old-k8s-version-264160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ delete  │ -p old-k8s-version-264160                                                                                                                                                                                                                           │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:38 UTC │
	│ delete  │ -p old-k8s-version-264160                                                                                                                                                                                                                           │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:38 UTC │
	│ start   │ -p default-k8s-diff-port-570856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:39 UTC │
	│ delete  │ -p cert-expiration-750367                                                                                                                                                                                                                           │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:38 UTC │
	│ start   │ -p embed-certs-227235 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-570856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:39 UTC │ 19 Nov 25 22:39 UTC │
	│ stop    │ -p default-k8s-diff-port-570856 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:39 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:38:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:38:08.697293  215017 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:38:08.704083  215017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:38:08.704139  215017 out.go:374] Setting ErrFile to fd 2...
	I1119 22:38:08.704160  215017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:38:08.706471  215017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:38:08.707066  215017 out.go:368] Setting JSON to false
	I1119 22:38:08.712552  215017 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4810,"bootTime":1763587079,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 22:38:08.712658  215017 start.go:143] virtualization:  
	I1119 22:38:08.726924  215017 out.go:179] * [embed-certs-227235] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:38:08.730374  215017 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:38:08.730495  215017 notify.go:221] Checking for updates...
	I1119 22:38:08.738314  215017 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:38:08.741839  215017 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:08.750729  215017 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 22:38:08.753969  215017 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:38:08.758263  215017 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:38:08.761943  215017 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:08.762046  215017 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:38:08.820199  215017 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:38:08.820314  215017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:38:08.984129  215017 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-19 22:38:08.967483926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:38:08.984262  215017 docker.go:319] overlay module found
	I1119 22:38:08.987717  215017 out.go:179] * Using the docker driver based on user configuration
	I1119 22:38:08.990549  215017 start.go:309] selected driver: docker
	I1119 22:38:08.990571  215017 start.go:930] validating driver "docker" against <nil>
	I1119 22:38:08.990586  215017 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:38:08.991509  215017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:38:09.111798  215017 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-19 22:38:09.089203249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:38:09.111938  215017 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:38:09.112256  215017 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:38:09.116504  215017 out.go:179] * Using Docker driver with root privileges
	I1119 22:38:09.124274  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:09.124350  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:09.124363  215017 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:38:09.124453  215017 start.go:353] cluster config:
	{Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:38:09.127735  215017 out.go:179] * Starting "embed-certs-227235" primary control-plane node in "embed-certs-227235" cluster
	I1119 22:38:09.130607  215017 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:38:09.133523  215017 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:38:09.136391  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:09.136441  215017 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1119 22:38:09.136452  215017 cache.go:65] Caching tarball of preloaded images
	I1119 22:38:09.136462  215017 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:38:09.136539  215017 preload.go:238] Found /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1119 22:38:09.136547  215017 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 22:38:09.136651  215017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json ...
	I1119 22:38:09.136675  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json: {Name:mk1b25f2623abcf89d25348624125d2f29b1b611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:09.183694  215017 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:38:09.183719  215017 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:38:09.183733  215017 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:38:09.183759  215017 start.go:360] acquireMachinesLock for embed-certs-227235: {Name:mk510c3d29263bf54ad7e262aba43b0a3739a3e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:38:09.184753  215017 start.go:364] duration metric: took 969.151µs to acquireMachinesLock for "embed-certs-227235"
	I1119 22:38:09.184791  215017 start.go:93] Provisioning new machine with config: &{Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:09.184859  215017 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:38:07.391014  213719 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-570856:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.786525535s)
	I1119 22:38:07.391041  213719 kic.go:203] duration metric: took 4.786659493s to extract preloaded images to volume ...
	W1119 22:38:07.391183  213719 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:38:07.391347  213719 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:38:07.481611  213719 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-570856 --name default-k8s-diff-port-570856 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-570856 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-570856 --network default-k8s-diff-port-570856 --ip 192.168.76.2 --volume default-k8s-diff-port-570856:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:38:07.963072  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Running}}
	I1119 22:38:07.992676  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:08.024300  213719 cli_runner.go:164] Run: docker exec default-k8s-diff-port-570856 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:38:08.120309  213719 oci.go:144] the created container "default-k8s-diff-port-570856" has a running status.
	I1119 22:38:08.120344  213719 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa...
	I1119 22:38:09.379092  213719 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:38:09.429394  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:09.452972  213719 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:38:09.452994  213719 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-570856 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:38:09.517582  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:09.543798  213719 machine.go:94] provisionDockerMachine start ...
	I1119 22:38:09.543906  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:09.574203  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:09.574537  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:09.574556  213719 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:38:09.753905  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-570856
	
	I1119 22:38:09.753978  213719 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-570856"
	I1119 22:38:09.754102  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:09.788736  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:09.789069  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:09.789083  213719 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-570856 && echo "default-k8s-diff-port-570856" | sudo tee /etc/hostname
	I1119 22:38:10.027975  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-570856
	
	I1119 22:38:10.028087  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.053594  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:10.053941  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:10.053963  213719 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-570856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-570856/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-570856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:38:10.228136  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:38:10.228163  213719 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-2347/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-2347/.minikube}
	I1119 22:38:10.228198  213719 ubuntu.go:190] setting up certificates
	I1119 22:38:10.228211  213719 provision.go:84] configureAuth start
	I1119 22:38:10.228271  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:10.260529  213719 provision.go:143] copyHostCerts
	I1119 22:38:10.260589  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem, removing ...
	I1119 22:38:10.260598  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem
	I1119 22:38:10.262543  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem (1082 bytes)
	I1119 22:38:10.262680  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem, removing ...
	I1119 22:38:10.262696  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem
	I1119 22:38:10.262738  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem (1123 bytes)
	I1119 22:38:10.262811  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem, removing ...
	I1119 22:38:10.262821  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem
	I1119 22:38:10.262848  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem (1675 bytes)
	I1119 22:38:10.262912  213719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-570856 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-570856 localhost minikube]
	I1119 22:38:10.546932  213719 provision.go:177] copyRemoteCerts
	I1119 22:38:10.547006  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:38:10.547053  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.566569  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:10.670710  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:38:10.689919  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:38:10.709802  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:38:10.729254  213719 provision.go:87] duration metric: took 501.020286ms to configureAuth
	I1119 22:38:10.729341  213719 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:38:10.729558  213719 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:10.729599  213719 machine.go:97] duration metric: took 1.185770725s to provisionDockerMachine
	I1119 22:38:10.729629  213719 client.go:176] duration metric: took 8.893120772s to LocalClient.Create
	I1119 22:38:10.729671  213719 start.go:167] duration metric: took 8.893208625s to libmachine.API.Create "default-k8s-diff-port-570856"
	I1119 22:38:10.729697  213719 start.go:293] postStartSetup for "default-k8s-diff-port-570856" (driver="docker")
	I1119 22:38:10.729723  213719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:38:10.729835  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:38:10.729907  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.749040  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:10.851117  213719 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:38:10.854970  213719 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:38:10.855002  213719 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:38:10.855018  213719 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/addons for local assets ...
	I1119 22:38:10.855073  213719 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/files for local assets ...
	I1119 22:38:10.855157  213719 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem -> 41442.pem in /etc/ssl/certs
	I1119 22:38:10.855262  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:38:10.863647  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:10.886722  213719 start.go:296] duration metric: took 156.987573ms for postStartSetup
	I1119 22:38:10.887078  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:10.911718  213719 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/config.json ...
	I1119 22:38:10.911987  213719 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:38:10.912028  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.930471  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.027896  213719 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:38:11.033540  213719 start.go:128] duration metric: took 9.200775241s to createHost
	I1119 22:38:11.033562  213719 start.go:83] releasing machines lock for "default-k8s-diff-port-570856", held for 9.200980978s
	I1119 22:38:11.033643  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:11.053285  213719 ssh_runner.go:195] Run: cat /version.json
	I1119 22:38:11.053332  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:11.053561  213719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:38:11.053645  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:11.092834  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.096401  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.213924  213719 ssh_runner.go:195] Run: systemctl --version
	I1119 22:38:11.315479  213719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:38:11.320121  213719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:38:11.320192  213719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:38:11.356242  213719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:38:11.356267  213719 start.go:496] detecting cgroup driver to use...
	I1119 22:38:11.356302  213719 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:38:11.356353  213719 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:38:11.373019  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:38:11.387519  213719 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:38:11.387580  213719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:38:11.404728  213719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:38:11.423798  213719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:38:11.599278  213719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:38:11.778834  213719 docker.go:234] disabling docker service ...
	I1119 22:38:11.778912  213719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:38:11.811353  213719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:38:11.835015  213719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:38:11.988384  213719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:38:12.144244  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:38:12.158812  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:38:12.181589  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:38:12.191717  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:38:12.200100  213719 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1119 22:38:12.200165  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1119 22:38:12.208392  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:12.216869  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:38:12.225624  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:12.234125  213719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:38:12.241943  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:38:12.250703  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:38:12.259235  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:38:12.267694  213719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:38:12.275336  213719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:38:12.282663  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:12.447019  213719 ssh_runner.go:195] Run: sudo systemctl restart containerd
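Net effect of the sed pass above: containerd is switched to the cgroupfs driver (matching the driver detected on the host), the pause image is pinned, and the CNI conf dir is set before the daemon is restarted. A minimal spot-check, assuming the edits applied cleanly to /etc/containerd/config.toml:

	sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# expected after the rewrite:
	#   SystemdCgroup = false                            (cgroupfs driver)
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true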
	I1119 22:38:12.641085  213719 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:38:12.641164  213719 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:38:12.647323  213719 start.go:564] Will wait 60s for crictl version
	I1119 22:38:12.647400  213719 ssh_runner.go:195] Run: which crictl
	I1119 22:38:12.654067  213719 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:38:12.706495  213719 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:38:12.706598  213719 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:12.728227  213719 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:12.756769  213719 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:38:09.188165  215017 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:38:09.188412  215017 start.go:159] libmachine.API.Create for "embed-certs-227235" (driver="docker")
	I1119 22:38:09.188460  215017 client.go:173] LocalClient.Create starting
	I1119 22:38:09.188522  215017 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem
	I1119 22:38:09.188557  215017 main.go:143] libmachine: Decoding PEM data...
	I1119 22:38:09.188575  215017 main.go:143] libmachine: Parsing certificate...
	I1119 22:38:09.188626  215017 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem
	I1119 22:38:09.188645  215017 main.go:143] libmachine: Decoding PEM data...
	I1119 22:38:09.188658  215017 main.go:143] libmachine: Parsing certificate...
	I1119 22:38:09.189025  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:38:09.226353  215017 cli_runner.go:211] docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:38:09.227297  215017 network_create.go:284] running [docker network inspect embed-certs-227235] to gather additional debugging logs...
	I1119 22:38:09.227404  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235
	W1119 22:38:09.248961  215017 cli_runner.go:211] docker network inspect embed-certs-227235 returned with exit code 1
	I1119 22:38:09.248988  215017 network_create.go:287] error running [docker network inspect embed-certs-227235]: docker network inspect embed-certs-227235: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-227235 not found
	I1119 22:38:09.249019  215017 network_create.go:289] output of [docker network inspect embed-certs-227235]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-227235 not found
	
	** /stderr **
	I1119 22:38:09.249110  215017 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:09.295459  215017 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b0fa93c84379 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:8f:4f:8f:5a:a3} reservation:<nil>}
	I1119 22:38:09.295758  215017 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-141c656f658f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:30:08:ea:1a:b9} reservation:<nil>}
	I1119 22:38:09.296184  215017 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae633a5ffae IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:73:d8:2e:30:94} reservation:<nil>}
	I1119 22:38:09.296454  215017 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0f1dbc601a67 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:02:5d:17:f2:79} reservation:<nil>}
	I1119 22:38:09.296821  215017 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a30110}
	I1119 22:38:09.296836  215017 network_create.go:124] attempt to create docker network embed-certs-227235 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 22:38:09.296890  215017 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-227235 embed-certs-227235
	I1119 22:38:09.389450  215017 network_create.go:108] docker network embed-certs-227235 192.168.85.0/24 created
	I1119 22:38:09.389488  215017 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-227235" container
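As the skipped-subnet lines show, minikube probes candidate /24s upward from 192.168.49.0/24 in steps of 9 (49, 58, 67, 76) until it finds one with no existing bridge attached, then pins the node to gateway+1 (here .2). The already-claimed subnets can be listed by hand; a sketch, assuming a POSIX shell on the host:

	# subnets already claimed by docker networks on this host
	docker network inspect $(docker network ls -q) \
	  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# here: 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, 192.168.76.0/24,
	# so 192.168.85.0/24 is the first free candidate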
	I1119 22:38:09.389570  215017 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:38:09.426012  215017 cli_runner.go:164] Run: docker volume create embed-certs-227235 --label name.minikube.sigs.k8s.io=embed-certs-227235 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:38:09.458413  215017 oci.go:103] Successfully created a docker volume embed-certs-227235
	I1119 22:38:09.458493  215017 cli_runner.go:164] Run: docker run --rm --name embed-certs-227235-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-227235 --entrypoint /usr/bin/test -v embed-certs-227235:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:38:10.048314  215017 oci.go:107] Successfully prepared a docker volume embed-certs-227235
	I1119 22:38:10.048380  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:10.048394  215017 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:38:10.048475  215017 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-227235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
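The preload step uses docker itself as the extraction tool: the lz4 tarball of cached images is bind-mounted read-only into a throwaway container whose entrypoint is tar, and unpacked into the named volume that will later back the node's /var. The same pattern pre-populates any volume; a minimal sketch with hypothetical names (myvolume, images.tar.lz4, some/base-image):

	docker volume create myvolume
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/images.tar.lz4":/preloaded.tar:ro \
	  -v myvolume:/extractDir \
	  some/base-image -I lz4 -xf /preloaded.tar -C /extractDir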
	I1119 22:38:12.761129  213719 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-570856 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:12.776448  213719 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 22:38:12.782082  213719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:12.793881  213719 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-570856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:38:12.794007  213719 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:12.794066  213719 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:12.828546  213719 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:12.828565  213719 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:38:12.828628  213719 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:12.874453  213719 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:12.874474  213719 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:38:12.874485  213719 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 containerd true true} ...
	I1119 22:38:12.874575  213719 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-570856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:38:12.874636  213719 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:38:12.913225  213719 cni.go:84] Creating CNI manager for ""
	I1119 22:38:12.913245  213719 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:12.913259  213719 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:38:12.913282  213719 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-570856 NodeName:default-k8s-diff-port-570856 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:38:12.913398  213719 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-570856"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
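The generated file above is a standard multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). It can also be exercised outside minikube; a sketch, assuming a kubeadm v1.34 binary on the PATH (minikube instead invokes its cached copy under /var/lib/minikube/binaries, as the next lines show):

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml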
	I1119 22:38:12.913465  213719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:38:12.935388  213719 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:38:12.935468  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:38:12.971226  213719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1119 22:38:13.007966  213719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:38:13.024911  213719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I1119 22:38:13.042516  213719 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:38:13.046335  213719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:13.059831  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:13.191953  213719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:13.211424  213719 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856 for IP: 192.168.76.2
	I1119 22:38:13.211448  213719 certs.go:195] generating shared ca certs ...
	I1119 22:38:13.211464  213719 certs.go:227] acquiring lock for ca certs: {Name:mk76285c445bf14c1e73dedba3201c9181209ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.211598  213719 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key
	I1119 22:38:13.211646  213719 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key
	I1119 22:38:13.211656  213719 certs.go:257] generating profile certs ...
	I1119 22:38:13.211720  213719 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key
	I1119 22:38:13.211738  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt with IP's: []
	I1119 22:38:13.477759  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt ...
	I1119 22:38:13.477790  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt: {Name:mk4af4f401c57a7635e92da9feef7f2a7cfe3346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.477979  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key ...
	I1119 22:38:13.477993  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key: {Name:mkf947f0bf4e302c69721a8e2f74d4a272d67d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.478093  213719 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b
	I1119 22:38:13.478112  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 22:38:13.929859  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b ...
	I1119 22:38:13.929894  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b: {Name:mkb8c9d5541b894a86911cf54efc4b7ac6afa1c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.930079  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b ...
	I1119 22:38:13.930094  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b: {Name:mk87a24e67d10968973a6f22462b3f5c313a93de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.930252  213719 certs.go:382] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt
	I1119 22:38:13.930347  213719 certs.go:386] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key
	I1119 22:38:13.930411  213719 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key
	I1119 22:38:13.930431  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt with IP's: []
	I1119 22:38:14.332796  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt ...
	I1119 22:38:14.332825  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt: {Name:mkc687d4f88c0016e52dc106cbb67f62cb641716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:14.339910  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key ...
	I1119 22:38:14.339932  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key: {Name:mk85a94508f4f26fe196530cf3fdf265d53e1f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
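Each profile cert above follows the same shape: a fresh key pair plus a leaf signed by the shared minikubeCA, with IP SANs only on the apiserver cert. An approximate openssl equivalent of the "minikube" apiserver cert (the 1095-day lifetime comes from CertExpiration:26280h0m0s in the cluster config; the key size and subject are assumptions, not read from the log):

	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 1095 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2')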
	I1119 22:38:14.340150  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem (1338 bytes)
	W1119 22:38:14.340197  213719 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144_empty.pem, impossibly tiny 0 bytes
	I1119 22:38:14.340211  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:38:14.340237  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:38:14.340265  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:38:14.340292  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem (1675 bytes)
	I1119 22:38:14.340340  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:14.340962  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:38:14.361559  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1119 22:38:14.382612  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:38:14.402496  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:38:14.420924  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:38:14.441447  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:38:14.460685  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:38:14.479294  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:38:14.497456  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem --> /usr/share/ca-certificates/4144.pem (1338 bytes)
	I1119 22:38:14.516533  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /usr/share/ca-certificates/41442.pem (1708 bytes)
	I1119 22:38:14.535911  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:38:14.553295  213719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:38:14.567201  213719 ssh_runner.go:195] Run: openssl version
	I1119 22:38:14.573427  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4144.pem && ln -fs /usr/share/ca-certificates/4144.pem /etc/ssl/certs/4144.pem"
	I1119 22:38:14.582011  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.585596  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.585711  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.626575  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4144.pem /etc/ssl/certs/51391683.0"
	I1119 22:38:14.635818  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41442.pem && ln -fs /usr/share/ca-certificates/41442.pem /etc/ssl/certs/41442.pem"
	I1119 22:38:14.644258  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.648142  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.648249  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.689425  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41442.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:38:14.698767  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:38:14.708989  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.713003  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.713064  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.755515  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
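The odd symlink names (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: libssl looks up trust anchors in /etc/ssl/certs by hash-named links, which is exactly what the `openssl x509 -hash` calls above compute. Reproducing one by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints the subject hash (b5213941 in this run), so the trust link becomes
	# /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem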
	I1119 22:38:14.766003  213719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:38:14.769904  213719 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:38:14.769997  213719 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-570856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:38:14.770068  213719 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:38:14.770172  213719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:38:14.831712  213719 cri.go:89] found id: ""
	I1119 22:38:14.831793  213719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:38:14.844012  213719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:38:14.859844  213719 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:38:14.859902  213719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:38:14.875606  213719 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:38:14.875626  213719 kubeadm.go:158] found existing configuration files:
	
	I1119 22:38:14.875678  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 22:38:14.887366  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:38:14.887426  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:38:14.898741  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 22:38:14.907757  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:38:14.907816  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:38:14.915056  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 22:38:14.925190  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:38:14.925246  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:38:14.933043  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 22:38:14.943964  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:38:14.944080  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:38:14.956850  213719 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:38:15.022467  213719 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:38:15.022528  213719 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:38:15.074445  213719 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:38:15.074520  213719 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:38:15.074585  213719 kubeadm.go:319] OS: Linux
	I1119 22:38:15.074665  213719 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:38:15.074741  213719 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:38:15.074834  213719 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:38:15.074895  213719 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:38:15.074955  213719 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:38:15.075040  213719 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:38:15.075127  213719 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:38:15.075186  213719 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:38:15.075235  213719 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:38:15.163382  213719 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:38:15.163500  213719 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:38:15.163599  213719 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:38:15.178538  213719 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:38:15.183821  213719 out.go:252]   - Generating certificates and keys ...
	I1119 22:38:15.183926  213719 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:38:15.184002  213719 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:38:16.331729  213719 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:38:14.780147  215017 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-227235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.73163045s)
	I1119 22:38:14.780195  215017 kic.go:203] duration metric: took 4.731797196s to extract preloaded images to volume ...
	W1119 22:38:14.780320  215017 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:38:14.780432  215017 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:38:14.866741  215017 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-227235 --name embed-certs-227235 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-227235 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-227235 --network embed-certs-227235 --ip 192.168.85.2 --volume embed-certs-227235:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:38:15.242087  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Running}}
	I1119 22:38:15.266134  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:15.289559  215017 cli_runner.go:164] Run: docker exec embed-certs-227235 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:38:15.358592  215017 oci.go:144] the created container "embed-certs-227235" has a running status.
	I1119 22:38:15.358618  215017 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa...
	I1119 22:38:16.151858  215017 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:38:16.174089  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:16.193774  215017 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:38:16.193801  215017 kic_runner.go:114] Args: [docker exec --privileged embed-certs-227235 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:38:16.253392  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:16.274685  215017 machine.go:94] provisionDockerMachine start ...
	I1119 22:38:16.274793  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:16.295933  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:16.296265  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:16.296279  215017 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:38:16.296925  215017 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
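The `Error dialing TCP: ssh: handshake failed: EOF` is the usual first-contact race while sshd inside the just-started container comes up; the client retries, and the command completes at 22:38:19 below. The ephemeral host port (33069) is read from docker's published-port table with the inspect template shown above, and the same session can be opened manually:

	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-227235
	# -> 33069
	ssh -p 33069 \
	  -i /home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa \
	  docker@127.0.0.1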
	I1119 22:38:16.648850  213719 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:38:17.027534  213719 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:38:17.535405  213719 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:38:18.457071  213719 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:38:18.457651  213719 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-570856 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:38:18.804201  213719 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:38:18.804516  213719 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-570856 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:38:19.251890  213719 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:38:19.443919  213719 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:38:19.989042  213719 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:38:19.989481  213719 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:38:20.248156  213719 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:38:20.575822  213719 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:38:21.322497  213719 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:38:21.582497  213719 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:38:22.046631  213719 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:38:22.048792  213719 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:38:22.056417  213719 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:38:19.458283  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-227235
	
	I1119 22:38:19.458361  215017 ubuntu.go:182] provisioning hostname "embed-certs-227235"
	I1119 22:38:19.458439  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:19.482663  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:19.482955  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:19.482966  215017 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-227235 && echo "embed-certs-227235" | sudo tee /etc/hostname
	I1119 22:38:19.668227  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-227235
	
	I1119 22:38:19.668364  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:19.696161  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:19.696518  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:19.696542  215017 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-227235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-227235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-227235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:38:19.844090  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:38:19.844206  215017 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-2347/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-2347/.minikube}
	I1119 22:38:19.844292  215017 ubuntu.go:190] setting up certificates
	I1119 22:38:19.844349  215017 provision.go:84] configureAuth start
	I1119 22:38:19.844460  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:19.871920  215017 provision.go:143] copyHostCerts
	I1119 22:38:19.871992  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem, removing ...
	I1119 22:38:19.872014  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem
	I1119 22:38:19.872097  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem (1082 bytes)
	I1119 22:38:19.872221  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem, removing ...
	I1119 22:38:19.872227  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem
	I1119 22:38:19.872260  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem (1123 bytes)
	I1119 22:38:19.872326  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem, removing ...
	I1119 22:38:19.872335  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem
	I1119 22:38:19.872358  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem (1675 bytes)
	I1119 22:38:19.872412  215017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem org=jenkins.embed-certs-227235 san=[127.0.0.1 192.168.85.2 embed-certs-227235 localhost minikube]
	I1119 22:38:20.323404  215017 provision.go:177] copyRemoteCerts
	I1119 22:38:20.323526  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:38:20.323586  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.356892  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.470993  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:38:20.504362  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 22:38:20.524210  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:38:20.544124  215017 provision.go:87] duration metric: took 699.7216ms to configureAuth
	I1119 22:38:20.544197  215017 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:38:20.544412  215017 config.go:182] Loaded profile config "embed-certs-227235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:20.544464  215017 machine.go:97] duration metric: took 4.26975387s to provisionDockerMachine
	I1119 22:38:20.544486  215017 client.go:176] duration metric: took 11.356016876s to LocalClient.Create
	I1119 22:38:20.544525  215017 start.go:167] duration metric: took 11.356113575s to libmachine.API.Create "embed-certs-227235"
	I1119 22:38:20.544554  215017 start.go:293] postStartSetup for "embed-certs-227235" (driver="docker")
	I1119 22:38:20.544591  215017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:38:20.544678  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:38:20.544756  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.565300  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.667067  215017 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:38:20.670916  215017 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:38:20.670945  215017 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:38:20.670955  215017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/addons for local assets ...
	I1119 22:38:20.671006  215017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/files for local assets ...
	I1119 22:38:20.671083  215017 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem -> 41442.pem in /etc/ssl/certs
	I1119 22:38:20.671184  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:38:20.680266  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:20.699713  215017 start.go:296] duration metric: took 155.103351ms for postStartSetup
	I1119 22:38:20.700150  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:20.718277  215017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json ...
	I1119 22:38:20.718546  215017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:38:20.718585  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.738828  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.841296  215017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:38:20.847214  215017 start.go:128] duration metric: took 11.662337268s to createHost
	I1119 22:38:20.847254  215017 start.go:83] releasing machines lock for "embed-certs-227235", held for 11.662472169s
	I1119 22:38:20.847344  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:20.867867  215017 ssh_runner.go:195] Run: cat /version.json
	I1119 22:38:20.867920  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.868163  215017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:38:20.868220  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.898565  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.913281  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:21.018482  215017 ssh_runner.go:195] Run: systemctl --version
	I1119 22:38:21.126924  215017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:38:21.133433  215017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:38:21.133571  215017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:38:21.174802  215017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:38:21.174882  215017 start.go:496] detecting cgroup driver to use...
	I1119 22:38:21.174939  215017 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:38:21.175034  215017 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:38:21.196072  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:38:21.213194  215017 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:38:21.213331  215017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:38:21.235649  215017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:38:21.258133  215017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:38:21.407367  215017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:38:21.569958  215017 docker.go:234] disabling docker service ...
	I1119 22:38:21.570075  215017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:38:21.595432  215017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:38:21.609975  215017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:38:21.765673  215017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:38:21.920710  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:38:21.936161  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:38:21.954615  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:38:21.964563  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:38:21.973986  215017 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1119 22:38:21.974106  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1119 22:38:21.983607  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:21.993186  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:38:22.003994  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:22.014801  215017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:38:22.024224  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:38:22.034441  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:38:22.044428  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
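Taken together, the sed edits above steer /etc/containerd/config.toml toward a fragment roughly like the one in the heredoc below. The section layout differs between containerd releases, so this is purely illustrative of the intended end state, not the file minikube actually writes:

	cat <<'EOF' >/tmp/containerd-config-fragment.toml
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	[plugins."io.containerd.grpc.v1.cri".cni]
	  conf_dir = "/etc/cni/net.d"
	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = false
	EOF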
	I1119 22:38:22.055950  215017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:38:22.067426  215017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:38:22.076858  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:22.269285  215017 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 22:38:22.431475  215017 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:38:22.431618  215017 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:38:22.438650  215017 start.go:564] Will wait 60s for crictl version
	I1119 22:38:22.438766  215017 ssh_runner.go:195] Run: which crictl
	I1119 22:38:22.442622  215017 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:38:22.484750  215017 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:38:22.484877  215017 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:22.511742  215017 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:22.537445  215017 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:38:22.540815  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:22.557518  215017 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:38:22.561769  215017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
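The hosts update above is a replace-or-append idiom; spelled out with the IP and name from the log:

	# Drop any stale host.minikube.internal line, append the current mapping,
	# then copy the rebuilt file back over /etc/hosts.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts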
	I1119 22:38:22.577497  215017 kubeadm.go:884] updating cluster {Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:38:22.577609  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:22.577676  215017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:22.612620  215017 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:22.612641  215017 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:38:22.612700  215017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:22.639391  215017 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:22.639472  215017 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:38:22.639495  215017 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 22:38:22.639629  215017 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-227235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
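The empty ExecStart= line in the kubelet unit above is deliberate systemd drop-in syntax: for a non-oneshot service, the inherited ExecStart must be cleared before a drop-in may define a replacement, so the pattern is always

	[Service]
	ExecStart=
	ExecStart=/new/command --with-flags

with /new/command standing in for the kubelet command line shown above.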
	I1119 22:38:22.639737  215017 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:38:22.675658  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:22.675677  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:22.675692  215017 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:38:22.675717  215017 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-227235 NodeName:embed-certs-227235 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:38:22.675829  215017 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-227235"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:38:22.675898  215017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:38:22.685785  215017 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:38:22.685854  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:38:22.694496  215017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 22:38:22.708805  215017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:38:22.723606  215017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
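Once kubeadm.yaml.new lands on the node, the generated config can be sanity-checked without mutating anything, using kubeadm's standard dry-run mode (binary and file paths from the log; the check itself is a sketch, not something the test runs):

	# Parse the config and print what init would do, applying no changes.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run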
	I1119 22:38:22.738717  215017 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:38:22.742965  215017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:22.753270  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:22.906872  215017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:22.924949  215017 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235 for IP: 192.168.85.2
	I1119 22:38:22.925022  215017 certs.go:195] generating shared ca certs ...
	I1119 22:38:22.925062  215017 certs.go:227] acquiring lock for ca certs: {Name:mk76285c445bf14c1e73dedba3201c9181209ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:22.925256  215017 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key
	I1119 22:38:22.925342  215017 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key
	I1119 22:38:22.925388  215017 certs.go:257] generating profile certs ...
	I1119 22:38:22.925497  215017 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key
	I1119 22:38:22.925541  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt with IP's: []
	I1119 22:38:22.060241  213719 out.go:252]   - Booting up control plane ...
	I1119 22:38:22.060350  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:38:22.060434  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:38:22.060504  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:38:22.079017  213719 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:38:22.079368  213719 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:38:22.087584  213719 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:38:22.087933  213719 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:38:22.087982  213719 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:38:22.256548  213719 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:38:22.256676  213719 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:38:23.257718  213719 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001280368s
	I1119 22:38:23.261499  213719 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:38:23.261885  213719 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1119 22:38:23.262185  213719 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:38:23.262436  213719 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:38:23.993413  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt ...
	I1119 22:38:23.993490  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt: {Name:mk9390e430c2adf83fa83c8b0fc6b544e7c6ac73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:23.993723  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key ...
	I1119 22:38:23.993760  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key: {Name:mkcc129ed7fd3a94daf755b808df5c2ca7b4f55b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:23.993902  215017 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43
	I1119 22:38:23.993944  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 22:38:24.949512  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 ...
	I1119 22:38:24.949545  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43: {Name:mk857e8f674694c0bdb694030b2402c50649af7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:24.949819  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43 ...
	I1119 22:38:24.949838  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43: {Name:mke1e1b8b382f368b842b0b0ebd43fcff825ce2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:24.949968  215017 certs.go:382] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt
	I1119 22:38:24.950099  215017 certs.go:386] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43 -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key
	I1119 22:38:24.950220  215017 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key
	I1119 22:38:24.950254  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt with IP's: []
	I1119 22:38:25.380015  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt ...
	I1119 22:38:25.380052  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt: {Name:mk60463442a2346a7467c65f294d7610875ba798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:25.381096  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key ...
	I1119 22:38:25.381124  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key: {Name:mkcc9ad63005e92a3409d0552d96d1073c0ab984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
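The client, apiserver, and aggregator certs generated above are ordinary CA-signed X.509 pairs. A hand-rolled equivalent for the client cert with openssl (output file names and the subject are illustrative; the CA pair is the shared minikube CA from the log):

	# Key + CSR for the client identity, then sign with the minikube CA.
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key \
	  -subj "/O=system:masters/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr \
	  -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key -CAcreateserial \
	  -days 1 -out client.crt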
	I1119 22:38:25.381427  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem (1338 bytes)
	W1119 22:38:25.381505  215017 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144_empty.pem, impossibly tiny 0 bytes
	I1119 22:38:25.381526  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:38:25.381569  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:38:25.381616  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:38:25.381661  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem (1675 bytes)
	I1119 22:38:25.381777  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:25.382497  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:38:25.423747  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1119 22:38:25.460637  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:38:25.483373  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:38:25.503061  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 22:38:25.523436  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:38:25.548990  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:38:25.581396  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:38:25.622314  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem --> /usr/share/ca-certificates/4144.pem (1338 bytes)
	I1119 22:38:25.653452  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /usr/share/ca-certificates/41442.pem (1708 bytes)
	I1119 22:38:25.693769  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:38:25.730224  215017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:38:25.757903  215017 ssh_runner.go:195] Run: openssl version
	I1119 22:38:25.770954  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4144.pem && ln -fs /usr/share/ca-certificates/4144.pem /etc/ssl/certs/4144.pem"
	I1119 22:38:25.787344  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.792427  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.792569  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.854376  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4144.pem /etc/ssl/certs/51391683.0"
	I1119 22:38:25.867349  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41442.pem && ln -fs /usr/share/ca-certificates/41442.pem /etc/ssl/certs/41442.pem"
	I1119 22:38:25.885000  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.895195  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.895369  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.952771  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41442.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:38:25.969512  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:38:25.988362  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:25.994984  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:25.995107  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:26.054751  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
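The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above follow OpenSSL's subject-hash convention: CA lookup in /etc/ssl/certs goes through <hash>.0 symlinks. A sketch for one cert (paths from the log):

	# Compute the subject hash and create the lookup symlink OpenSSL expects.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"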
	I1119 22:38:26.081314  215017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:38:26.089485  215017 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:38:26.089616  215017 kubeadm.go:401] StartCluster: {Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:38:26.089729  215017 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:38:26.089883  215017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:38:26.175081  215017 cri.go:89] found id: ""
	I1119 22:38:26.175273  215017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:38:26.201739  215017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:38:26.213453  215017 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:38:26.213538  215017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:38:26.227920  215017 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:38:26.227957  215017 kubeadm.go:158] found existing configuration files:
	
	I1119 22:38:26.228016  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:38:26.238822  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:38:26.238956  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:38:26.248847  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:38:26.259874  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:38:26.259981  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:38:26.269610  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:38:26.280662  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:38:26.280762  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:38:26.291067  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:38:26.299774  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:38:26.299863  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
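The four grep/rm pairs above all apply one rule; condensed (endpoint and file list from the log):

	# Remove any kubeconfig that does not reference the expected endpoint,
	# so kubeadm regenerates it rather than reusing a stale cluster address.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done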
	I1119 22:38:26.307272  215017 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:38:26.359370  215017 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:38:26.359879  215017 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:38:26.392070  215017 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:38:26.392176  215017 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:38:26.392260  215017 kubeadm.go:319] OS: Linux
	I1119 22:38:26.392332  215017 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:38:26.392404  215017 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:38:26.392515  215017 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:38:26.392603  215017 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:38:26.392689  215017 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:38:26.392799  215017 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:38:26.392885  215017 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:38:26.392964  215017 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:38:26.393042  215017 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:38:26.488613  215017 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:38:26.488982  215017 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:38:26.489119  215017 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:38:26.506528  215017 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:38:26.511504  215017 out.go:252]   - Generating certificates and keys ...
	I1119 22:38:26.511614  215017 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:38:26.511693  215017 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:38:27.434809  215017 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:38:27.852737  215017 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:38:28.219331  215017 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:38:28.667646  215017 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:38:29.503070  215017 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:38:29.503604  215017 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-227235 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:38:29.941520  215017 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:38:29.942072  215017 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-227235 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:38:30.399611  215017 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:38:30.598854  215017 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:38:31.066766  215017 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:38:31.067322  215017 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:38:31.727030  215017 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:38:33.054496  215017 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:38:33.215756  215017 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:38:33.577706  215017 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:38:33.942194  215017 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:38:33.943308  215017 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:38:33.946457  215017 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:38:33.309225  213719 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 10.04648217s
	I1119 22:38:36.096444  213719 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.833517484s
	I1119 22:38:37.264214  213719 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.001762391s
	I1119 22:38:37.296022  213719 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:38:37.335127  213719 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:38:37.354913  213719 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:38:37.355423  213719 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-570856 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:38:37.372044  213719 kubeadm.go:319] [bootstrap-token] Using token: r8vw8k.tssokqfhghfm62o1
	I1119 22:38:33.949816  215017 out.go:252]   - Booting up control plane ...
	I1119 22:38:33.949930  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:38:33.950028  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:38:33.951280  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:38:33.979582  215017 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:38:33.979702  215017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:38:33.992539  215017 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:38:33.992652  215017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:38:33.992697  215017 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:38:34.209173  215017 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:38:34.209304  215017 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:38:35.710488  215017 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501663596s
	I1119 22:38:35.713801  215017 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:38:35.714133  215017 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 22:38:35.714829  215017 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:38:35.715359  215017 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
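The three control-plane-check probes above are plain HTTPS health endpoints and can be queried by hand; -k is needed because the serving certs are cluster-internal (URLs from the log):

	curl -sk https://192.168.85.2:8443/livez      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez        # kube-scheduler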
	I1119 22:38:37.374987  213719 out.go:252]   - Configuring RBAC rules ...
	I1119 22:38:37.375116  213719 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:38:37.383216  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:38:37.395526  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:38:37.407816  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:38:37.414859  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:38:37.420042  213719 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:38:37.672205  213719 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:38:38.187591  213719 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:38:38.676130  213719 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:38:38.677635  213719 kubeadm.go:319] 
	I1119 22:38:38.677723  213719 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:38:38.677730  213719 kubeadm.go:319] 
	I1119 22:38:38.677810  213719 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:38:38.677815  213719 kubeadm.go:319] 
	I1119 22:38:38.677841  213719 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:38:38.678403  213719 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:38:38.678471  213719 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:38:38.678477  213719 kubeadm.go:319] 
	I1119 22:38:38.678533  213719 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:38:38.678538  213719 kubeadm.go:319] 
	I1119 22:38:38.678587  213719 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:38:38.678591  213719 kubeadm.go:319] 
	I1119 22:38:38.678645  213719 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:38:38.678746  213719 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:38:38.678817  213719 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:38:38.678822  213719 kubeadm.go:319] 
	I1119 22:38:38.679193  213719 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:38:38.679286  213719 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:38:38.679291  213719 kubeadm.go:319] 
	I1119 22:38:38.679572  213719 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token r8vw8k.tssokqfhghfm62o1 \
	I1119 22:38:38.679686  213719 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 \
	I1119 22:38:38.690497  213719 kubeadm.go:319] 	--control-plane 
	I1119 22:38:38.690515  213719 kubeadm.go:319] 
	I1119 22:38:38.690863  213719 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:38:38.690881  213719 kubeadm.go:319] 
	I1119 22:38:38.691192  213719 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token r8vw8k.tssokqfhghfm62o1 \
	I1119 22:38:38.691498  213719 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 
	I1119 22:38:38.710307  213719 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:38:38.710544  213719 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:38:38.710653  213719 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:38:38.710672  213719 cni.go:84] Creating CNI manager for ""
	I1119 22:38:38.710679  213719 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:38.713840  213719 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:38:38.716961  213719 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:38:38.736887  213719 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:38:38.736905  213719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:38:38.789317  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:38:39.400153  213719 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:38:39.400321  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:39.400530  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-570856 minikube.k8s.io/updated_at=2025_11_19T22_38_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=default-k8s-diff-port-570856 minikube.k8s.io/primary=true
	I1119 22:38:39.975271  213719 ops.go:34] apiserver oom_adj: -16
	I1119 22:38:39.975391  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:40.475885  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:40.976254  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:41.475492  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:41.975953  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:42.476216  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:42.976019  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:43.476374  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:43.938571  213719 kubeadm.go:1114] duration metric: took 4.538317084s to wait for elevateKubeSystemPrivileges
	I1119 22:38:43.938601  213719 kubeadm.go:403] duration metric: took 29.168610658s to StartCluster
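The repeated `kubectl get sa default` calls above are a readiness poll: the default ServiceAccount only exists once the controller-manager's ServiceAccount controller has run, which is what the elevateKubeSystemPrivileges duration measures. A condensed sketch of the same bounded wait:

	# Poll every 500ms, up to 60 tries, until the default SA appears.
	for i in $(seq 1 60); do
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1 && break
	  sleep 0.5
	done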
	I1119 22:38:43.938617  213719 settings.go:142] acquiring lock: {Name:mk5c8f7d46662d574c7e53cf7b09709855a1e14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:43.938675  213719 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:43.939379  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:43.939602  213719 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:43.939699  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:38:43.939950  213719 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:43.939984  213719 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:38:43.940039  213719 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-570856"
	I1119 22:38:43.940056  213719 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-570856"
	I1119 22:38:43.940077  213719 host.go:66] Checking if "default-k8s-diff-port-570856" exists ...
	I1119 22:38:43.940595  213719 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-570856"
	I1119 22:38:43.940614  213719 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-570856"
	I1119 22:38:43.940913  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:43.941163  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:43.943262  213719 out.go:179] * Verifying Kubernetes components...
	I1119 22:38:43.946436  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:43.988827  213719 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:38:43.992407  213719 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:43.992429  213719 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:38:43.992505  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:44.003465  213719 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-570856"
	I1119 22:38:44.003510  213719 host.go:66] Checking if "default-k8s-diff-port-570856" exists ...
	I1119 22:38:44.003968  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:44.031387  213719 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:44.031407  213719 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:38:44.031480  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:44.054335  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:44.071105  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:44.576022  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:38:44.576179  213719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:44.632284  213719 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:44.830916  213719 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:45.842317  213719 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.266107104s)
	I1119 22:38:45.843122  213719 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-570856" to be "Ready" ...
	I1119 22:38:45.843439  213719 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.267383122s)
	I1119 22:38:45.843467  213719 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
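The sed pipeline above splices a hosts plugin block into the Corefile stored in the coredns ConfigMap; the injected fragment resembles the comment below, and the patched map can be inspected directly:

	# Injected Corefile block (illustrative):
	#   hosts {
	#      192.168.76.1 host.minikube.internal
	#      fallthrough
	#   }
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml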
	I1119 22:38:45.844308  213719 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21199704s)
	I1119 22:38:46.281571  213719 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.450606827s)
	I1119 22:38:46.284845  213719 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 22:38:46.287763  213719 addons.go:515] duration metric: took 2.347755369s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:38:46.347624  213719 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-570856" context rescaled to 1 replicas
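The rescale reported above corresponds to an ordinary Deployment scale; done by hand it would be roughly:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system scale deployment coredns --replicas=1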
	I1119 22:38:44.428112  215017 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 8.712323929s
	I1119 22:38:45.320373  215017 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.604203465s
	I1119 22:38:46.717967  215017 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.003347835s
	I1119 22:38:46.741715  215017 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:38:46.757144  215017 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:38:46.772462  215017 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:38:46.772924  215017 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-227235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:38:46.785381  215017 kubeadm.go:319] [bootstrap-token] Using token: ocom7o.y2g4phnwe8gpvos5
	I1119 22:38:46.788355  215017 out.go:252]   - Configuring RBAC rules ...
	I1119 22:38:46.788494  215017 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:38:46.793683  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:38:46.802650  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:38:46.811439  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:38:46.816154  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:38:46.823297  215017 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:38:47.128653  215017 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:38:47.591010  215017 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:38:48.125064  215017 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:38:48.126191  215017 kubeadm.go:319] 
	I1119 22:38:48.126264  215017 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:38:48.126270  215017 kubeadm.go:319] 
	I1119 22:38:48.126346  215017 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:38:48.126350  215017 kubeadm.go:319] 
	I1119 22:38:48.126376  215017 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:38:48.126445  215017 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:38:48.126502  215017 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:38:48.126506  215017 kubeadm.go:319] 
	I1119 22:38:48.126560  215017 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:38:48.126564  215017 kubeadm.go:319] 
	I1119 22:38:48.126611  215017 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:38:48.126618  215017 kubeadm.go:319] 
	I1119 22:38:48.126669  215017 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:38:48.126743  215017 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:38:48.126818  215017 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:38:48.126826  215017 kubeadm.go:319] 
	I1119 22:38:48.126910  215017 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:38:48.126985  215017 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:38:48.126989  215017 kubeadm.go:319] 
	I1119 22:38:48.127072  215017 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ocom7o.y2g4phnwe8gpvos5 \
	I1119 22:38:48.127175  215017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 \
	I1119 22:38:48.127195  215017 kubeadm.go:319] 	--control-plane 
	I1119 22:38:48.127200  215017 kubeadm.go:319] 
	I1119 22:38:48.127283  215017 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:38:48.127287  215017 kubeadm.go:319] 
	I1119 22:38:48.127368  215017 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ocom7o.y2g4phnwe8gpvos5 \
	I1119 22:38:48.127478  215017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 
	I1119 22:38:48.131460  215017 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:38:48.131800  215017 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:38:48.131963  215017 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:38:48.132002  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:48.132025  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:48.135396  215017 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:38:48.138681  215017 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:38:48.143238  215017 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:38:48.143261  215017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:38:48.157842  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:38:48.509463  215017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:38:48.509605  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:48.509695  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-227235 minikube.k8s.io/updated_at=2025_11_19T22_38_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=embed-certs-227235 minikube.k8s.io/primary=true
	I1119 22:38:48.531347  215017 ops.go:34] apiserver oom_adj: -16
	W1119 22:38:47.847437  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:50.346251  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:38:48.707714  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:49.208479  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:49.708331  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:50.207957  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:50.708351  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:51.208551  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:51.707874  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.208750  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.708197  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.870444  215017 kubeadm.go:1114] duration metric: took 4.360885722s to wait for elevateKubeSystemPrivileges
	I1119 22:38:52.870476  215017 kubeadm.go:403] duration metric: took 26.780891514s to StartCluster
	I1119 22:38:52.870495  215017 settings.go:142] acquiring lock: {Name:mk5c8f7d46662d574c7e53cf7b09709855a1e14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:52.870563  215017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:52.871877  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:52.872086  215017 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:52.872205  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:38:52.872510  215017 config.go:182] Loaded profile config "embed-certs-227235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:52.872559  215017 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:38:52.872623  215017 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-227235"
	I1119 22:38:52.872642  215017 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-227235"
	I1119 22:38:52.872666  215017 host.go:66] Checking if "embed-certs-227235" exists ...
	I1119 22:38:52.873151  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.873401  215017 addons.go:70] Setting default-storageclass=true in profile "embed-certs-227235"
	I1119 22:38:52.873423  215017 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-227235"
	I1119 22:38:52.873686  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.875844  215017 out.go:179] * Verifying Kubernetes components...
	I1119 22:38:52.879063  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:52.907006  215017 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:38:52.909996  215017 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:52.910022  215017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:38:52.910096  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:52.917662  215017 addons.go:239] Setting addon default-storageclass=true in "embed-certs-227235"
	I1119 22:38:52.917721  215017 host.go:66] Checking if "embed-certs-227235" exists ...
	I1119 22:38:52.918300  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.944204  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:52.957685  215017 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:52.957706  215017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:38:52.957769  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:52.993629  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:53.201073  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:38:53.201195  215017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:53.314355  215017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:53.327779  215017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:53.841120  215017 node_ready.go:35] waiting up to 6m0s for node "embed-certs-227235" to be "Ready" ...
	I1119 22:38:53.841457  215017 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 22:38:54.280299  215017 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1119 22:38:52.346734  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:54.347319  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:38:54.283209  215017 addons.go:515] duration metric: took 1.410633606s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:38:54.349594  215017 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-227235" context rescaled to 1 replicas
	W1119 22:38:55.844628  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:38:58.344650  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:38:56.846106  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:58.846730  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:00.846861  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:00.347351  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:02.844246  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:02.847116  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:05.346461  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:04.845042  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:07.345010  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:07.347215  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:09.846094  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:09.345198  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:11.346411  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:11.846299  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:13.846861  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:16.347393  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:13.844623  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:16.344779  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:18.345372  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:18.846715  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:21.346432  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:20.347964  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:22.843854  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:23.846693  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:39:25.847621  213719 node_ready.go:49] node "default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:25.847652  213719 node_ready.go:38] duration metric: took 40.004497931s for node "default-k8s-diff-port-570856" to be "Ready" ...
	I1119 22:39:25.847666  213719 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:39:25.847724  213719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:39:25.874926  213719 api_server.go:72] duration metric: took 41.935286387s to wait for apiserver process to appear ...
	I1119 22:39:25.874949  213719 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:39:25.874968  213719 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1119 22:39:25.885461  213719 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1119 22:39:25.887414  213719 api_server.go:141] control plane version: v1.34.1
	I1119 22:39:25.887438  213719 api_server.go:131] duration metric: took 12.482962ms to wait for apiserver health ...
	I1119 22:39:25.887448  213719 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:39:25.891159  213719 system_pods.go:59] 8 kube-system pods found
	I1119 22:39:25.891193  213719 system_pods.go:61] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:25.891200  213719 system_pods.go:61] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:25.891207  213719 system_pods.go:61] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:25.891212  213719 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:25.891217  213719 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:25.891221  213719 system_pods.go:61] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:25.891226  213719 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:25.891231  213719 system_pods.go:61] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:25.891238  213719 system_pods.go:74] duration metric: took 3.784369ms to wait for pod list to return data ...
	I1119 22:39:25.891248  213719 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:39:25.894907  213719 default_sa.go:45] found service account: "default"
	I1119 22:39:25.894971  213719 default_sa.go:55] duration metric: took 3.716182ms for default service account to be created ...
	I1119 22:39:25.894995  213719 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:39:25.898958  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:25.899042  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:25.899064  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:25.899105  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:25.899128  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:25.899147  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:25.899170  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:25.899190  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:25.899259  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:25.899299  213719 retry.go:31] will retry after 294.705373ms: missing components: kube-dns
	I1119 22:39:26.198486  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.198523  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:26.198531  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.198541  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.198546  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.198552  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.198556  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.198561  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.198566  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:26.198584  213719 retry.go:31] will retry after 303.182095ms: missing components: kube-dns
	I1119 22:39:26.506554  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.506591  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:26.506598  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.506604  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.506608  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.506613  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.506618  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.506622  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.506627  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:26.506647  213719 retry.go:31] will retry after 472.574028ms: missing components: kube-dns
	I1119 22:39:26.984178  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.984212  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Running
	I1119 22:39:26.984220  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.984226  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.984231  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.984235  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.984239  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.984243  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.984247  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Running
	I1119 22:39:26.984255  213719 system_pods.go:126] duration metric: took 1.089240935s to wait for k8s-apps to be running ...
	I1119 22:39:26.984269  213719 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:39:26.984329  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:39:26.998904  213719 system_svc.go:56] duration metric: took 14.6234ms WaitForService to wait for kubelet
	I1119 22:39:26.998932  213719 kubeadm.go:587] duration metric: took 43.05929861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:39:26.998953  213719 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:39:27.002787  213719 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:39:27.003037  213719 node_conditions.go:123] node cpu capacity is 2
	I1119 22:39:27.003065  213719 node_conditions.go:105] duration metric: took 4.106062ms to run NodePressure ...
	I1119 22:39:27.003081  213719 start.go:242] waiting for startup goroutines ...
	I1119 22:39:27.003095  213719 start.go:247] waiting for cluster config update ...
	I1119 22:39:27.003112  213719 start.go:256] writing updated cluster config ...
	I1119 22:39:27.003490  213719 ssh_runner.go:195] Run: rm -f paused
	I1119 22:39:27.008294  213719 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:27.012665  213719 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4m8f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.017786  213719 pod_ready.go:94] pod "coredns-66bc5c9577-4m8f2" is "Ready"
	I1119 22:39:27.017812  213719 pod_ready.go:86] duration metric: took 5.121391ms for pod "coredns-66bc5c9577-4m8f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.020648  213719 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.025936  213719 pod_ready.go:94] pod "etcd-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.026011  213719 pod_ready.go:86] duration metric: took 5.321771ms for pod "etcd-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.028977  213719 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.034047  213719 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.034073  213719 pod_ready.go:86] duration metric: took 5.070216ms for pod "kube-apiserver-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.036706  213719 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.413085  213719 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.413111  213719 pod_ready.go:86] duration metric: took 376.376792ms for pod "kube-controller-manager-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.613330  213719 pod_ready.go:83] waiting for pod "kube-proxy-n4868" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.012703  213719 pod_ready.go:94] pod "kube-proxy-n4868" is "Ready"
	I1119 22:39:28.012745  213719 pod_ready.go:86] duration metric: took 399.33038ms for pod "kube-proxy-n4868" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.213996  213719 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.613271  213719 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:28.613305  213719 pod_ready.go:86] duration metric: took 399.283191ms for pod "kube-scheduler-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.613319  213719 pod_ready.go:40] duration metric: took 1.604992351s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:28.668463  213719 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:39:28.671810  213719 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-570856" cluster and "default" namespace by default
	W1119 22:39:24.844923  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:26.845154  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:29.344473  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:31.844696  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	I1119 22:39:34.345023  215017 node_ready.go:49] node "embed-certs-227235" is "Ready"
	I1119 22:39:34.345048  215017 node_ready.go:38] duration metric: took 40.503896306s for node "embed-certs-227235" to be "Ready" ...
	I1119 22:39:34.345063  215017 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:39:34.345119  215017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:39:34.362404  215017 api_server.go:72] duration metric: took 41.490288995s to wait for apiserver process to appear ...
	I1119 22:39:34.362426  215017 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:39:34.362445  215017 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:39:34.390640  215017 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:39:34.392448  215017 api_server.go:141] control plane version: v1.34.1
	I1119 22:39:34.392508  215017 api_server.go:131] duration metric: took 30.073646ms to wait for apiserver health ...
	I1119 22:39:34.392532  215017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:39:34.400782  215017 system_pods.go:59] 8 kube-system pods found
	I1119 22:39:34.400862  215017 system_pods.go:61] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.400885  215017 system_pods.go:61] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.400909  215017 system_pods.go:61] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.400930  215017 system_pods.go:61] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.400951  215017 system_pods.go:61] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.400973  215017 system_pods.go:61] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.400994  215017 system_pods.go:61] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.401017  215017 system_pods.go:61] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.401041  215017 system_pods.go:74] duration metric: took 8.489033ms to wait for pod list to return data ...
	I1119 22:39:34.401063  215017 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:39:34.404927  215017 default_sa.go:45] found service account: "default"
	I1119 22:39:34.404991  215017 default_sa.go:55] duration metric: took 3.906002ms for default service account to be created ...
	I1119 22:39:34.405016  215017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:39:34.408626  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.408709  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.408731  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.408754  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.408780  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.408803  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.408827  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.408848  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.408881  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.408920  215017 retry.go:31] will retry after 270.078819ms: missing components: kube-dns
	I1119 22:39:34.682801  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.682906  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.682929  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.682965  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.682988  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.683010  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.683041  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.683064  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.683087  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.683118  215017 retry.go:31] will retry after 271.259245ms: missing components: kube-dns
	I1119 22:39:34.958505  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.958539  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Running
	I1119 22:39:34.958547  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.958551  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.958557  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.958584  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.958595  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.958600  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.958603  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Running
	I1119 22:39:34.958612  215017 system_pods.go:126] duration metric: took 553.576677ms to wait for k8s-apps to be running ...
	I1119 22:39:34.958625  215017 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:39:34.958694  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:39:34.972706  215017 system_svc.go:56] duration metric: took 14.071483ms WaitForService to wait for kubelet
	I1119 22:39:34.972778  215017 kubeadm.go:587] duration metric: took 42.100669257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:39:34.972814  215017 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:39:34.975990  215017 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:39:34.976072  215017 node_conditions.go:123] node cpu capacity is 2
	I1119 22:39:34.976093  215017 node_conditions.go:105] duration metric: took 3.255435ms to run NodePressure ...
	I1119 22:39:34.976107  215017 start.go:242] waiting for startup goroutines ...
	I1119 22:39:34.976115  215017 start.go:247] waiting for cluster config update ...
	I1119 22:39:34.976126  215017 start.go:256] writing updated cluster config ...
	I1119 22:39:34.976427  215017 ssh_runner.go:195] Run: rm -f paused
	I1119 22:39:34.980344  215017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:34.985616  215017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6xhjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:34.991603  215017 pod_ready.go:94] pod "coredns-66bc5c9577-6xhjj" is "Ready"
	I1119 22:39:34.991644  215017 pod_ready.go:86] duration metric: took 5.99596ms for pod "coredns-66bc5c9577-6xhjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:34.994018  215017 pod_ready.go:83] waiting for pod "etcd-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.003190  215017 pod_ready.go:94] pod "etcd-embed-certs-227235" is "Ready"
	I1119 22:39:35.003274  215017 pod_ready.go:86] duration metric: took 9.230481ms for pod "etcd-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.007638  215017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.016450  215017 pod_ready.go:94] pod "kube-apiserver-embed-certs-227235" is "Ready"
	I1119 22:39:35.016480  215017 pod_ready.go:86] duration metric: took 8.80742ms for pod "kube-apiserver-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.019656  215017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.385673  215017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-227235" is "Ready"
	I1119 22:39:35.385700  215017 pod_ready.go:86] duration metric: took 365.999627ms for pod "kube-controller-manager-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.584880  215017 pod_ready.go:83] waiting for pod "kube-proxy-plgtr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.984356  215017 pod_ready.go:94] pod "kube-proxy-plgtr" is "Ready"
	I1119 22:39:35.984391  215017 pod_ready.go:86] duration metric: took 399.485083ms for pod "kube-proxy-plgtr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.185075  215017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.585576  215017 pod_ready.go:94] pod "kube-scheduler-embed-certs-227235" is "Ready"
	I1119 22:39:36.585603  215017 pod_ready.go:86] duration metric: took 400.501535ms for pod "kube-scheduler-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.585617  215017 pod_ready.go:40] duration metric: took 1.605197997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:36.654842  215017 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:39:36.659599  215017 out.go:179] * Done! kubectl is now configured to use "embed-certs-227235" cluster and "default" namespace by default
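
Both profiles patch the CoreDNS ConfigMap during startup so that host.minikube.internal resolves to the host gateway (the sed pipelines completed at 22:38:45 and 22:38:53 above). A quick way to confirm the injected record on the embed-certs cluster is to dump the patched Corefile; this is a sketch assuming the stock ConfigMap layout, with the context name taken from the log:

    # Print the live Corefile; the sed pipeline above should have inserted
    #     hosts {
    #        192.168.85.1 host.minikube.internal
    #        fallthrough
    #     }
    # ahead of the forward plugin.
    kubectl --context embed-certs-227235 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}'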
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ee67ba8ea568c       1611cd07b61d5       6 seconds ago        Running             busybox                   0                   37003496753a7       busybox                                      default
	53dad3142c14c       138784d87c9c5       11 seconds ago       Running             coredns                   0                   da0b810921826       coredns-66bc5c9577-6xhjj                     kube-system
	b65cef45f66bd       ba04bb24b9575       11 seconds ago       Running             storage-provisioner       0                   d4a4a6be4ccbf       storage-provisioner                          kube-system
	d66cb2ea01457       b1a8c6f707935       52 seconds ago       Running             kindnet-cni               0                   9ba83fac00fa8       kindnet-v7ws4                                kube-system
	f093ca4eda738       05baa95f5142d       52 seconds ago       Running             kube-proxy                0                   6634710379274       kube-proxy-plgtr                             kube-system
	355a3fbf79821       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   a298ae6b66aee       kube-scheduler-embed-certs-227235            kube-system
	5756cab0342dc       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   60e7c54e7300a       kube-controller-manager-embed-certs-227235   kube-system
	26aa304b0d835       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   03e03cf9b234c       kube-apiserver-embed-certs-227235            kube-system
	7f78bcd34bd8c       a1894772a478e       About a minute ago   Running             etcd                      0                   a534f5312fa74       etcd-embed-certs-227235                      kube-system
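
The column layout above matches recent crictl releases (older ones omit the POD and NAMESPACE columns). Assuming the profile is still running, the same table can likely be reproduced on the node with:

    # List all CRI containers, including exited ones, inside the minikube node.
    minikube ssh -p embed-certs-227235 "sudo crictl ps -a"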
	
	
	==> containerd <==
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.512610224Z" level=info msg="CreateContainer within sandbox \"d4a4a6be4ccbf73dae0a89a16acad744a0433db5d274ee085e18a38df0caa61a\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"b65cef45f66bd8982ce2de4bf0bc496f53a8596537e811864c0880d902519606\""
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.516624412Z" level=info msg="StartContainer for \"b65cef45f66bd8982ce2de4bf0bc496f53a8596537e811864c0880d902519606\""
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.518022457Z" level=info msg="connecting to shim b65cef45f66bd8982ce2de4bf0bc496f53a8596537e811864c0880d902519606" address="unix:///run/containerd/s/f6335249eef8c42f057e0e307b557f0522e7dcc6fe2b9dc74a42b63339e2a0fd" protocol=ttrpc version=3
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.518692553Z" level=info msg="Container 53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.529829728Z" level=info msg="CreateContainer within sandbox \"da0b8109218267c7348e795db266384102dddf10049f1e1b26dac80079e3fee5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d\""
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.531154214Z" level=info msg="StartContainer for \"53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d\""
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.534560248Z" level=info msg="connecting to shim 53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d" address="unix:///run/containerd/s/76d845c6b272eda3fabf883a03ae9814aedddbcb2879c6108011877e30153dc2" protocol=ttrpc version=3
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.606842371Z" level=info msg="StartContainer for \"b65cef45f66bd8982ce2de4bf0bc496f53a8596537e811864c0880d902519606\" returns successfully"
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.625693483Z" level=info msg="StartContainer for \"53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d\" returns successfully"
	Nov 19 22:39:37 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:37.208049365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3a9ffa6e-50c6-4636-a1c1-d3c478e5e486,Namespace:default,Attempt:0,}"
	Nov 19 22:39:37 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:37.277326065Z" level=info msg="connecting to shim 37003496753a73cc614d0a48480a70362619f7e965a152e0b09fbb96fa4b572e" address="unix:///run/containerd/s/c255262205f9fa82747cd88f1aa052eb98086c264a51b2d1bf145f7bedbb38d9" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:39:37 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:37.354746006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3a9ffa6e-50c6-4636-a1c1-d3c478e5e486,Namespace:default,Attempt:0,} returns sandbox id \"37003496753a73cc614d0a48480a70362619f7e965a152e0b09fbb96fa4b572e\""
	Nov 19 22:39:37 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:37.356927140Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.500199757Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.502289362Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.505528239Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.510659346Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.511360399Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.154212382s"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.511404256Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.529692456Z" level=info msg="CreateContainer within sandbox \"37003496753a73cc614d0a48480a70362619f7e965a152e0b09fbb96fa4b572e\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.551671936Z" level=info msg="Container ee67ba8ea568caaa173a4e1e5d983d36261b41514c58a3e37b3fb43863bda3b6: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.562023692Z" level=info msg="CreateContainer within sandbox \"37003496753a73cc614d0a48480a70362619f7e965a152e0b09fbb96fa4b572e\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ee67ba8ea568caaa173a4e1e5d983d36261b41514c58a3e37b3fb43863bda3b6\""
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.565781927Z" level=info msg="StartContainer for \"ee67ba8ea568caaa173a4e1e5d983d36261b41514c58a3e37b3fb43863bda3b6\""
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.570702069Z" level=info msg="connecting to shim ee67ba8ea568caaa173a4e1e5d983d36261b41514c58a3e37b3fb43863bda3b6" address="unix:///run/containerd/s/c255262205f9fa82747cd88f1aa052eb98086c264a51b2d1bf145f7bedbb38d9" protocol=ttrpc version=3
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.657908228Z" level=info msg="StartContainer for \"ee67ba8ea568caaa173a4e1e5d983d36261b41514c58a3e37b3fb43863bda3b6\" returns successfully"
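
These entries are the containerd systemd unit's journal for the window in which the busybox sandbox and container were created. Assuming the node is still up, a similar slice can be pulled with journalctl; the unit name and time bounds here are copied from the excerpt and are illustrative:

    # Fetch containerd's journal for the busybox pull/start window.
    minikube ssh -p embed-certs-227235 \
      "sudo journalctl -u containerd --no-pager --since '2025-11-19 22:39:34' --until '2025-11-19 22:39:40'"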
	
	
	==> coredns [53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53782 - 20487 "HINFO IN 6381140115399585633.8959357964783949944. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036564299s
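
The HINFO query answered NXDOMAIN above is CoreDNS's own startup loop-detection probe, not a client lookup. To check that the injected host record actually resolves in-cluster, a throwaway pod works; the pod name dns-check is illustrative, and the image reuses the busybox image already pulled in the containerd log:

    # One-off lookup; expect an answer of 192.168.85.1 for host.minikube.internal.
    kubectl --context embed-certs-227235 run dns-check --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup host.minikube.internal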
	
	
	==> describe nodes <==
	Name:               embed-certs-227235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-227235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=embed-certs-227235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_38_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:38:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-227235
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:39:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:39:33 +0000   Wed, 19 Nov 2025 22:38:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:39:33 +0000   Wed, 19 Nov 2025 22:38:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:39:33 +0000   Wed, 19 Nov 2025 22:38:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:39:33 +0000   Wed, 19 Nov 2025 22:39:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-227235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                2b37cee5-570a-4071-b36f-9658bf43ea86
	  Boot ID:                    b3875353-65b3-44b7-ad72-afadd7e2486a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-6xhjj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-embed-certs-227235                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-v7ws4                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-embed-certs-227235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-227235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-plgtr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-embed-certs-227235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node embed-certs-227235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node embed-certs-227235 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x7 over 71s)  kubelet          Node embed-certs-227235 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node embed-certs-227235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node embed-certs-227235 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node embed-certs-227235 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           55s                node-controller  Node embed-certs-227235 event: Registered Node embed-certs-227235 in Controller
	  Normal   NodeReady                13s                kubelet          Node embed-certs-227235 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.032038] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[Nov19 21:18] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034282] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.730183] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.763794] kauditd_printk_skb: 36 callbacks suppressed
	[Nov19 21:50] hrtimer: interrupt took 11278311 ns
	
	
	==> etcd [7f78bcd34bd8cf5f518e7de427ae0c653aa056c63742361f17c18ddc9bef7867] <==
	{"level":"warn","ts":"2025-11-19T22:38:40.164302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.208976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.258229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.327481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.348337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.396280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.453698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.477257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.515340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.562853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.588935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.655765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.697479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.730813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.777119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.810975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.844661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.888734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.934346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.026263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.078031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.128603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.199541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.247891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.435313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56882","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:39:46 up  1:21,  0 user,  load average: 3.19, 3.48, 2.86
	Linux embed-certs-227235 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d66cb2ea0145794c66d30b2be0902f9b38f2ebe74716d2a2ad609a759721e4ae] <==
	I1119 22:38:53.687785       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:38:53.688070       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:38:53.688454       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:38:53.688474       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:38:53.688486       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:38:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:38:53.898518       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:38:53.898539       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:38:53.898549       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:38:53.898685       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:39:23.898453       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:39:23.898474       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 22:39:23.898521       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 22:39:23.898562       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 22:39:25.499480       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:39:25.499737       1 metrics.go:72] Registering metrics
	I1119 22:39:25.499922       1 controller.go:711] "Syncing nftables rules"
	I1119 22:39:33.903849       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:39:33.903915       1 main.go:301] handling current node
	I1119 22:39:43.898274       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:39:43.898502       1 main.go:301] handling current node
	
	
	==> kube-apiserver [26aa304b0d835bec8feab72e1dec5a663069f487b93f0bc31bc6de599a1474d6] <==
	I1119 22:38:43.349571       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:38:43.385441       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:38:43.440406       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:43.457455       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:38:43.490037       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:38:43.509338       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:43.513909       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:38:43.673518       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:38:43.785068       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:38:43.799372       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:38:46.150556       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:38:46.223603       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:38:46.375325       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:38:46.383969       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1119 22:38:46.385652       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:38:46.395318       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:38:46.475630       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:38:47.560168       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:38:47.589700       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:38:47.601641       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:38:52.287517       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:52.298591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:52.330874       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:38:52.579628       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 22:39:45.175533       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:43644: use of closed network connection
	
	
	==> kube-controller-manager [5756cab0342dc1679a014cd2d2e99d44d1cffbf30793fae007f64c3e93b0bcbe] <==
	I1119 22:38:51.493209       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 22:38:51.500530       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 22:38:51.500567       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:38:51.500582       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:38:51.500617       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:38:51.504263       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:38:51.504588       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:38:51.504720       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:38:51.518699       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-227235" podCIDRs=["10.244.0.0/24"]
	I1119 22:38:51.523458       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:38:51.523765       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:38:51.523903       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:38:51.524564       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:38:51.529336       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 22:38:51.530270       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:38:51.530800       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:38:51.530981       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 22:38:51.531113       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:38:51.531363       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:38:51.534322       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:38:51.531452       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-227235"
	I1119 22:38:51.535267       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:38:51.536446       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:38:51.536543       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:39:36.542054       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f093ca4eda7387ebeeb9cb96f29d1f576a12fa26db2c80cb49f3ec63e0dd40eb] <==
	I1119 22:38:53.604462       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:38:53.726176       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:38:53.826494       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:38:53.826569       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:38:53.826670       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:38:53.925406       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:38:53.925646       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:38:53.931088       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:38:53.931631       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:38:53.931936       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:38:53.933313       1 config.go:200] "Starting service config controller"
	I1119 22:38:53.933469       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:38:53.933564       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:38:53.933643       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:38:53.933743       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:38:53.933803       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:38:53.934603       1 config.go:309] "Starting node config controller"
	I1119 22:38:53.934739       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:38:53.934810       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:38:54.034218       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:38:54.034259       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:38:54.034302       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [355a3fbf7982116247ee00c0e41d1de1cf83a16ecb21b21e955c2526aadd59eb] <==
	I1119 22:38:45.249418       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:38:45.270974       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:38:45.271160       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:38:45.286301       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:38:45.286428       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1119 22:38:45.301685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:38:45.301971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:38:45.302118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:38:45.302583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:38:45.318021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 22:38:45.318642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:38:45.318693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:38:45.318753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:38:45.318805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:38:45.318881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:38:45.320499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:38:45.320794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:38:45.320962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:38:45.321081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:38:45.321131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:38:45.321182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:38:45.321234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:38:45.321341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:38:45.326637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1119 22:38:46.587605       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:38:48 embed-certs-227235 kubelet[1485]: I1119 22:38:48.504829    1485 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 22:38:48 embed-certs-227235 kubelet[1485]: I1119 22:38:48.608576    1485 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-embed-certs-227235"
	Nov 19 22:38:48 embed-certs-227235 kubelet[1485]: E1119 22:38:48.624090    1485 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-227235\" already exists" pod="kube-system/kube-scheduler-embed-certs-227235"
	Nov 19 22:38:51 embed-certs-227235 kubelet[1485]: I1119 22:38:51.526197    1485 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:38:51 embed-certs-227235 kubelet[1485]: I1119 22:38:51.528099    1485 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.641908    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbxbb\" (UniqueName: \"kubernetes.io/projected/b8f6ea6e-c156-4ce9-9c71-0057f87a1be5-kube-api-access-bbxbb\") pod \"kindnet-v7ws4\" (UID: \"b8f6ea6e-c156-4ce9-9c71-0057f87a1be5\") " pod="kube-system/kindnet-v7ws4"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642431    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4-kube-proxy\") pod \"kube-proxy-plgtr\" (UID: \"6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4\") " pod="kube-system/kube-proxy-plgtr"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642521    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4-xtables-lock\") pod \"kube-proxy-plgtr\" (UID: \"6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4\") " pod="kube-system/kube-proxy-plgtr"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642598    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4-lib-modules\") pod \"kube-proxy-plgtr\" (UID: \"6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4\") " pod="kube-system/kube-proxy-plgtr"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642675    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b8f6ea6e-c156-4ce9-9c71-0057f87a1be5-cni-cfg\") pod \"kindnet-v7ws4\" (UID: \"b8f6ea6e-c156-4ce9-9c71-0057f87a1be5\") " pod="kube-system/kindnet-v7ws4"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642746    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8f6ea6e-c156-4ce9-9c71-0057f87a1be5-xtables-lock\") pod \"kindnet-v7ws4\" (UID: \"b8f6ea6e-c156-4ce9-9c71-0057f87a1be5\") " pod="kube-system/kindnet-v7ws4"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642814    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8f6ea6e-c156-4ce9-9c71-0057f87a1be5-lib-modules\") pod \"kindnet-v7ws4\" (UID: \"b8f6ea6e-c156-4ce9-9c71-0057f87a1be5\") " pod="kube-system/kindnet-v7ws4"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642884    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4lbc\" (UniqueName: \"kubernetes.io/projected/6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4-kube-api-access-h4lbc\") pod \"kube-proxy-plgtr\" (UID: \"6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4\") " pod="kube-system/kube-proxy-plgtr"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.775317    1485 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 22:38:53 embed-certs-227235 kubelet[1485]: I1119 22:38:53.664799    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-v7ws4" podStartSLOduration=1.66478017 podStartE2EDuration="1.66478017s" podCreationTimestamp="2025-11-19 22:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:53.640982195 +0000 UTC m=+6.277456973" watchObservedRunningTime="2025-11-19 22:38:53.66478017 +0000 UTC m=+6.301254931"
	Nov 19 22:38:53 embed-certs-227235 kubelet[1485]: I1119 22:38:53.664963    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-plgtr" podStartSLOduration=1.6649564909999999 podStartE2EDuration="1.664956491s" podCreationTimestamp="2025-11-19 22:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:53.664524626 +0000 UTC m=+6.300999387" watchObservedRunningTime="2025-11-19 22:38:53.664956491 +0000 UTC m=+6.301431253"
	Nov 19 22:39:33 embed-certs-227235 kubelet[1485]: I1119 22:39:33.948084    1485 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:39:34 embed-certs-227235 kubelet[1485]: I1119 22:39:34.167725    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dad399ee-80b6-4c16-bed2-296586a544b5-tmp\") pod \"storage-provisioner\" (UID: \"dad399ee-80b6-4c16-bed2-296586a544b5\") " pod="kube-system/storage-provisioner"
	Nov 19 22:39:34 embed-certs-227235 kubelet[1485]: I1119 22:39:34.167779    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dae34df3-583b-4539-a4d6-78240466e86c-config-volume\") pod \"coredns-66bc5c9577-6xhjj\" (UID: \"dae34df3-583b-4539-a4d6-78240466e86c\") " pod="kube-system/coredns-66bc5c9577-6xhjj"
	Nov 19 22:39:34 embed-certs-227235 kubelet[1485]: I1119 22:39:34.167805    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4dj9\" (UniqueName: \"kubernetes.io/projected/dad399ee-80b6-4c16-bed2-296586a544b5-kube-api-access-w4dj9\") pod \"storage-provisioner\" (UID: \"dad399ee-80b6-4c16-bed2-296586a544b5\") " pod="kube-system/storage-provisioner"
	Nov 19 22:39:34 embed-certs-227235 kubelet[1485]: I1119 22:39:34.167833    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsm7j\" (UniqueName: \"kubernetes.io/projected/dae34df3-583b-4539-a4d6-78240466e86c-kube-api-access-xsm7j\") pod \"coredns-66bc5c9577-6xhjj\" (UID: \"dae34df3-583b-4539-a4d6-78240466e86c\") " pod="kube-system/coredns-66bc5c9577-6xhjj"
	Nov 19 22:39:34 embed-certs-227235 kubelet[1485]: I1119 22:39:34.757024    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6xhjj" podStartSLOduration=42.756995862 podStartE2EDuration="42.756995862s" podCreationTimestamp="2025-11-19 22:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:39:34.729537809 +0000 UTC m=+47.366012570" watchObservedRunningTime="2025-11-19 22:39:34.756995862 +0000 UTC m=+47.393470631"
	Nov 19 22:39:36 embed-certs-227235 kubelet[1485]: I1119 22:39:36.894216    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.894196522 podStartE2EDuration="42.894196522s" podCreationTimestamp="2025-11-19 22:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:39:34.783041437 +0000 UTC m=+47.419516198" watchObservedRunningTime="2025-11-19 22:39:36.894196522 +0000 UTC m=+49.530671291"
	Nov 19 22:39:36 embed-certs-227235 kubelet[1485]: I1119 22:39:36.991077    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwgw5\" (UniqueName: \"kubernetes.io/projected/3a9ffa6e-50c6-4636-a1c1-d3c478e5e486-kube-api-access-zwgw5\") pod \"busybox\" (UID: \"3a9ffa6e-50c6-4636-a1c1-d3c478e5e486\") " pod="default/busybox"
	Nov 19 22:39:39 embed-certs-227235 kubelet[1485]: I1119 22:39:39.751033    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.593800571 podStartE2EDuration="3.751009596s" podCreationTimestamp="2025-11-19 22:39:36 +0000 UTC" firstStartedPulling="2025-11-19 22:39:37.35636912 +0000 UTC m=+49.992843881" lastFinishedPulling="2025-11-19 22:39:39.513578145 +0000 UTC m=+52.150052906" observedRunningTime="2025-11-19 22:39:39.750573473 +0000 UTC m=+52.387048234" watchObservedRunningTime="2025-11-19 22:39:39.751009596 +0000 UTC m=+52.387484357"
	
	
	==> storage-provisioner [b65cef45f66bd8982ce2de4bf0bc496f53a8596537e811864c0880d902519606] <==
	I1119 22:39:34.601144       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:39:34.637602       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:39:34.637660       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:39:34.641387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:34.651283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:39:34.651644       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:39:34.654302       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-227235_0da3fa68-c347-45ef-be87-ad82e1b302e4!
	I1119 22:39:34.654375       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"01511746-7309-4f99-ba53-8a779e31347e", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-227235_0da3fa68-c347-45ef-be87-ad82e1b302e4 became leader
	W1119 22:39:34.661438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:34.678515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:39:34.768257       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-227235_0da3fa68-c347-45ef-be87-ad82e1b302e4!
	W1119 22:39:36.691502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:36.699724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:38.703865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:38.708605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:40.711912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:40.717376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:42.720582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:42.726214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:44.732317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:44.741663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:46.745041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:46.750687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-227235 -n embed-certs-227235
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-227235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-227235
helpers_test.go:243: (dbg) docker inspect embed-certs-227235:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65",
	        "Created": "2025-11-19T22:38:14.89237119Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216317,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:38:14.9613705Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65/hosts",
	        "LogPath": "/var/lib/docker/containers/d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65/d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65-json.log",
	        "Name": "/embed-certs-227235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-227235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-227235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d6f2464a8f7d819c80fdfae3865e85e0bca84b0d24ccb8841a43ca942eef0d65",
	                "LowerDir": "/var/lib/docker/overlay2/9e37529bf823f891186cde56bd1b9c72b6dd472ec161c8780f8b79d02781c89f-init/diff:/var/lib/docker/overlay2/b6ebc9601ea0ae08484f263713f3358dd93f7748ebfafbd9155229908dee9606/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e37529bf823f891186cde56bd1b9c72b6dd472ec161c8780f8b79d02781c89f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e37529bf823f891186cde56bd1b9c72b6dd472ec161c8780f8b79d02781c89f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e37529bf823f891186cde56bd1b9c72b6dd472ec161c8780f8b79d02781c89f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-227235",
	                "Source": "/var/lib/docker/volumes/embed-certs-227235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-227235",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-227235",
	                "name.minikube.sigs.k8s.io": "embed-certs-227235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "074ae2b20604f5cf109a5529099b7ca8b9d17e4baf842e9cae7062b942888fd1",
	            "SandboxKey": "/var/run/docker/netns/074ae2b20604",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-227235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:63:0b:50:12:58",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4579dc366f68625047c4bbef84debda5dfb8e27d05811c5f0c328cdac0d52cd1",
	                    "EndpointID": "0dd96d29824891f26029c1ee4d3ea893734d695b8c5801609f3f1d43d926017b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-227235",
	                        "d6f2464a8f7d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-227235 -n embed-certs-227235
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-227235 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-227235 logs -n 25: (1.191574526s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-156590 sudo crio config                                                                                                                                                                                                                   │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ delete  │ -p cilium-156590                                                                                                                                                                                                                                    │ cilium-156590                │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-expiration-750367 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ ssh     │ force-systemd-env-388402 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-388402     │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p force-systemd-env-388402                                                                                                                                                                                                                         │ force-systemd-env-388402     │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p cert-options-815306 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ cert-options-815306 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p cert-options-815306 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ delete  │ -p cert-options-815306                                                                                                                                                                                                                              │ cert-options-815306          │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ start   │ -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-264160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:36 UTC │ 19 Nov 25 22:36 UTC │
	│ stop    │ -p old-k8s-version-264160 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:36 UTC │ 19 Nov 25 22:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-264160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ start   │ -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ image   │ old-k8s-version-264160 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ pause   │ -p old-k8s-version-264160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ start   │ -p cert-expiration-750367 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:38 UTC │
	│ unpause │ -p old-k8s-version-264160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:37 UTC │
	│ delete  │ -p old-k8s-version-264160                                                                                                                                                                                                                           │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:37 UTC │ 19 Nov 25 22:38 UTC │
	│ delete  │ -p old-k8s-version-264160                                                                                                                                                                                                                           │ old-k8s-version-264160       │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:38 UTC │
	│ start   │ -p default-k8s-diff-port-570856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:39 UTC │
	│ delete  │ -p cert-expiration-750367                                                                                                                                                                                                                           │ cert-expiration-750367       │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:38 UTC │
	│ start   │ -p embed-certs-227235 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:38 UTC │ 19 Nov 25 22:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-570856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:39 UTC │ 19 Nov 25 22:39 UTC │
	│ stop    │ -p default-k8s-diff-port-570856 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:39 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:38:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:38:08.697293  215017 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:38:08.704083  215017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:38:08.704139  215017 out.go:374] Setting ErrFile to fd 2...
	I1119 22:38:08.704160  215017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:38:08.706471  215017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:38:08.707066  215017 out.go:368] Setting JSON to false
	I1119 22:38:08.712552  215017 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4810,"bootTime":1763587079,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 22:38:08.712658  215017 start.go:143] virtualization:  
	I1119 22:38:08.726924  215017 out.go:179] * [embed-certs-227235] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:38:08.730374  215017 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:38:08.730495  215017 notify.go:221] Checking for updates...
	I1119 22:38:08.738314  215017 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:38:08.741839  215017 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:08.750729  215017 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 22:38:08.753969  215017 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:38:08.758263  215017 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:38:08.761943  215017 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:08.762046  215017 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:38:08.820199  215017 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:38:08.820314  215017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:38:08.984129  215017 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-19 22:38:08.967483926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:38:08.984262  215017 docker.go:319] overlay module found
	I1119 22:38:08.987717  215017 out.go:179] * Using the docker driver based on user configuration
	I1119 22:38:08.990549  215017 start.go:309] selected driver: docker
	I1119 22:38:08.990571  215017 start.go:930] validating driver "docker" against <nil>
	I1119 22:38:08.990586  215017 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:38:08.991509  215017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:38:09.111798  215017 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-19 22:38:09.089203249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:38:09.111938  215017 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:38:09.112256  215017 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:38:09.116504  215017 out.go:179] * Using Docker driver with root privileges
	I1119 22:38:09.124274  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:09.124350  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:09.124363  215017 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:38:09.124453  215017 start.go:353] cluster config:
	{Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:38:09.127735  215017 out.go:179] * Starting "embed-certs-227235" primary control-plane node in "embed-certs-227235" cluster
	I1119 22:38:09.130607  215017 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:38:09.133523  215017 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:38:09.136391  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:09.136441  215017 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1119 22:38:09.136452  215017 cache.go:65] Caching tarball of preloaded images
	I1119 22:38:09.136462  215017 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:38:09.136539  215017 preload.go:238] Found /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1119 22:38:09.136547  215017 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 22:38:09.136651  215017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json ...
	I1119 22:38:09.136675  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json: {Name:mk1b25f2623abcf89d25348624125d2f29b1b611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:09.183694  215017 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:38:09.183719  215017 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:38:09.183733  215017 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:38:09.183759  215017 start.go:360] acquireMachinesLock for embed-certs-227235: {Name:mk510c3d29263bf54ad7e262aba43b0a3739a3e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:38:09.184753  215017 start.go:364] duration metric: took 969.151µs to acquireMachinesLock for "embed-certs-227235"
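The acquireMachinesLock step above serializes the two concurrent starts interleaved in this log (PIDs 215017 and 213719) behind a named lock with a 500ms retry delay and a 10m timeout. As a toy flavor of that poll-with-deadline pattern, here is a Go sketch using an exclusive lock file; the path and durations are illustrative, and this is not minikube's actual lock implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquire polls for an exclusive lock file with a retry delay and an
    // overall deadline. O_CREATE|O_EXCL makes creation atomic: only one
    // process can win while the file exists.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out waiting for " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held")
    }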
	I1119 22:38:09.184791  215017 start.go:93] Provisioning new machine with config: &{Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:09.184859  215017 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:38:07.391014  213719 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-570856:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.786525535s)
	I1119 22:38:07.391041  213719 kic.go:203] duration metric: took 4.786659493s to extract preloaded images to volume ...
	W1119 22:38:07.391183  213719 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:38:07.391347  213719 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:38:07.481611  213719 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-570856 --name default-k8s-diff-port-570856 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-570856 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-570856 --network default-k8s-diff-port-570856 --ip 192.168.76.2 --volume default-k8s-diff-port-570856:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:38:07.963072  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Running}}
	I1119 22:38:07.992676  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:08.024300  213719 cli_runner.go:164] Run: docker exec default-k8s-diff-port-570856 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:38:08.120309  213719 oci.go:144] the created container "default-k8s-diff-port-570856" has a running status.
	I1119 22:38:08.120344  213719 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa...
	I1119 22:38:09.379092  213719 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:38:09.429394  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:09.452972  213719 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:38:09.452994  213719 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-570856 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:38:09.517582  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:09.543798  213719 machine.go:94] provisionDockerMachine start ...
	I1119 22:38:09.543906  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:09.574203  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:09.574537  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:09.574556  213719 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:38:09.753905  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-570856
	
	I1119 22:38:09.753978  213719 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-570856"
	I1119 22:38:09.754102  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:09.788736  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:09.789069  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:09.789083  213719 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-570856 && echo "default-k8s-diff-port-570856" | sudo tee /etc/hostname
	I1119 22:38:10.027975  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-570856
	
	I1119 22:38:10.028087  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.053594  213719 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:10.053941  213719 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1119 22:38:10.053963  213719 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-570856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-570856/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-570856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:38:10.228136  213719 main.go:143] libmachine: SSH cmd err, output: <nil>: 
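Every SSH round trip in this stretch targets 127.0.0.1:33064, a host port resolved from the container's published 22/tcp mapping with the docker inspect template shown in the log. A minimal Go sketch of the same lookup; the container name is taken from this run, and shelling out to the docker CLI is an illustrative shortcut, not minikube's exact code path:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Resolve the host port Docker published for the container's 22/tcp,
    // using the same Go template that appears in the log above.
    func main() {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		"default-k8s-diff-port-570856").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 33064
    }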
	I1119 22:38:10.228163  213719 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-2347/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-2347/.minikube}
	I1119 22:38:10.228198  213719 ubuntu.go:190] setting up certificates
	I1119 22:38:10.228211  213719 provision.go:84] configureAuth start
	I1119 22:38:10.228271  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:10.260529  213719 provision.go:143] copyHostCerts
	I1119 22:38:10.260589  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem, removing ...
	I1119 22:38:10.260598  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem
	I1119 22:38:10.262543  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem (1082 bytes)
	I1119 22:38:10.262680  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem, removing ...
	I1119 22:38:10.262696  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem
	I1119 22:38:10.262738  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem (1123 bytes)
	I1119 22:38:10.262811  213719 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem, removing ...
	I1119 22:38:10.262821  213719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem
	I1119 22:38:10.262848  213719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem (1675 bytes)
	I1119 22:38:10.262912  213719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-570856 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-570856 localhost minikube]
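configureAuth generates a server certificate whose SAN list covers every name the machine may be reached by (127.0.0.1, 192.168.76.2, the container name, localhost, minikube). A compact sketch of SAN-bearing certificate generation with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair referenced in the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs mirror the san=[...] list in the log line above.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-570856"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"default-k8s-diff-port-570856", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    	}
    	// Self-signed here for brevity; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }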
	I1119 22:38:10.546932  213719 provision.go:177] copyRemoteCerts
	I1119 22:38:10.547006  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:38:10.547053  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.566569  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:10.670710  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:38:10.689919  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:38:10.709802  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:38:10.729254  213719 provision.go:87] duration metric: took 501.020286ms to configureAuth
	I1119 22:38:10.729341  213719 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:38:10.729558  213719 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:10.729599  213719 machine.go:97] duration metric: took 1.185770725s to provisionDockerMachine
	I1119 22:38:10.729629  213719 client.go:176] duration metric: took 8.893120772s to LocalClient.Create
	I1119 22:38:10.729671  213719 start.go:167] duration metric: took 8.893208625s to libmachine.API.Create "default-k8s-diff-port-570856"
	I1119 22:38:10.729697  213719 start.go:293] postStartSetup for "default-k8s-diff-port-570856" (driver="docker")
	I1119 22:38:10.729723  213719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:38:10.729835  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:38:10.729907  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.749040  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:10.851117  213719 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:38:10.854970  213719 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:38:10.855002  213719 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:38:10.855018  213719 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/addons for local assets ...
	I1119 22:38:10.855073  213719 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/files for local assets ...
	I1119 22:38:10.855157  213719 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem -> 41442.pem in /etc/ssl/certs
	I1119 22:38:10.855262  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:38:10.863647  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:10.886722  213719 start.go:296] duration metric: took 156.987573ms for postStartSetup
	I1119 22:38:10.887078  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:10.911718  213719 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/config.json ...
	I1119 22:38:10.911987  213719 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:38:10.912028  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:10.930471  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.027896  213719 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:38:11.033540  213719 start.go:128] duration metric: took 9.200775241s to createHost
	I1119 22:38:11.033562  213719 start.go:83] releasing machines lock for "default-k8s-diff-port-570856", held for 9.200980978s
	I1119 22:38:11.033643  213719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-570856
	I1119 22:38:11.053285  213719 ssh_runner.go:195] Run: cat /version.json
	I1119 22:38:11.053332  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:11.053561  213719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:38:11.053645  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:11.092834  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.096401  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:11.213924  213719 ssh_runner.go:195] Run: systemctl --version
	I1119 22:38:11.315479  213719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:38:11.320121  213719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:38:11.320192  213719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:38:11.356242  213719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:38:11.356267  213719 start.go:496] detecting cgroup driver to use...
	I1119 22:38:11.356302  213719 detect.go:187] detected "cgroupfs" cgroup driver on host os
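The cgroup driver is chosen by probing the host; this machine reports cgroupfs (cgroup v1, consistent with the Ubuntu 20.04 / 5.15 kernel above). A rough, Linux-only Go probe in the same spirit, built on the underlying statfs check rather than minikube's actual detect.go:

    package main

    import (
    	"fmt"
    	"syscall"
    )

    const cgroup2Magic = 0x63677270 // CGROUP2_SUPER_MAGIC from linux/magic.h

    // If /sys/fs/cgroup is a cgroup2 filesystem the host runs the unified
    // hierarchy; otherwise assume v1, which maps to the "cgroupfs" driver
    // this host reports.
    func main() {
    	var st syscall.Statfs_t
    	if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
    		panic(err)
    	}
    	if st.Type == cgroup2Magic {
    		fmt.Println("cgroup v2 (unified hierarchy)")
    	} else {
    		fmt.Println("cgroup v1 -> cgroupfs driver")
    	}
    }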
	I1119 22:38:11.356353  213719 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:38:11.373019  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:38:11.387519  213719 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:38:11.387580  213719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:38:11.404728  213719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:38:11.423798  213719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:38:11.599278  213719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:38:11.778834  213719 docker.go:234] disabling docker service ...
	I1119 22:38:11.778912  213719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:38:11.811353  213719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:38:11.835015  213719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:38:11.988384  213719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:38:12.144244  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:38:12.158812  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:38:12.181589  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:38:12.191717  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:38:12.200100  213719 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1119 22:38:12.200165  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1119 22:38:12.208392  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:12.216869  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:38:12.225624  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:12.234125  213719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:38:12.241943  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:38:12.250703  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:38:12.259235  213719 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:38:12.267694  213719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:38:12.275336  213719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:38:12.282663  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:12.447019  213719 ssh_runner.go:195] Run: sudo systemctl restart containerd
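The run of sed one-liners above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, force SystemdCgroup = false to match the cgroupfs driver, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports, before the daemon-reload and containerd restart. Each edit preserves indentation via a capture group; the same kind of anchored rewrite in Go, applied to a sample fragment (illustrative, not the node's real file):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Toggle a setting in a config.toml fragment while keeping its
    // indentation, mirroring the sed pattern s|^( *)SystemdCgroup = .*$|...|.
    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }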
	I1119 22:38:12.641085  213719 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:38:12.641164  213719 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:38:12.647323  213719 start.go:564] Will wait 60s for crictl version
	I1119 22:38:12.647400  213719 ssh_runner.go:195] Run: which crictl
	I1119 22:38:12.654067  213719 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:38:12.706495  213719 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:38:12.706598  213719 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:12.728227  213719 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:12.756769  213719 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:38:09.188165  215017 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:38:09.188412  215017 start.go:159] libmachine.API.Create for "embed-certs-227235" (driver="docker")
	I1119 22:38:09.188460  215017 client.go:173] LocalClient.Create starting
	I1119 22:38:09.188522  215017 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem
	I1119 22:38:09.188557  215017 main.go:143] libmachine: Decoding PEM data...
	I1119 22:38:09.188575  215017 main.go:143] libmachine: Parsing certificate...
	I1119 22:38:09.188626  215017 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem
	I1119 22:38:09.188645  215017 main.go:143] libmachine: Decoding PEM data...
	I1119 22:38:09.188658  215017 main.go:143] libmachine: Parsing certificate...
	I1119 22:38:09.189025  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:38:09.226353  215017 cli_runner.go:211] docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:38:09.227297  215017 network_create.go:284] running [docker network inspect embed-certs-227235] to gather additional debugging logs...
	I1119 22:38:09.227404  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235
	W1119 22:38:09.248961  215017 cli_runner.go:211] docker network inspect embed-certs-227235 returned with exit code 1
	I1119 22:38:09.248988  215017 network_create.go:287] error running [docker network inspect embed-certs-227235]: docker network inspect embed-certs-227235: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-227235 not found
	I1119 22:38:09.249019  215017 network_create.go:289] output of [docker network inspect embed-certs-227235]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-227235 not found
	
	** /stderr **
	I1119 22:38:09.249110  215017 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:09.295459  215017 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b0fa93c84379 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:8f:4f:8f:5a:a3} reservation:<nil>}
	I1119 22:38:09.295758  215017 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-141c656f658f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:30:08:ea:1a:b9} reservation:<nil>}
	I1119 22:38:09.296184  215017 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae633a5ffae IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:73:d8:2e:30:94} reservation:<nil>}
	I1119 22:38:09.296454  215017 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0f1dbc601a67 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:02:5d:17:f2:79} reservation:<nil>}
	I1119 22:38:09.296821  215017 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a30110}
	I1119 22:38:09.296836  215017 network_create.go:124] attempt to create docker network embed-certs-227235 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 22:38:09.296890  215017 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-227235 embed-certs-227235
	I1119 22:38:09.389450  215017 network_create.go:108] docker network embed-certs-227235 192.168.85.0/24 created
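network.go walks candidate private /24s and settles on the first free one; here 192.168.49.0/24 through 192.168.76.0/24 are held by earlier profiles, so embed-certs-227235 lands on 192.168.85.0/24. A small Go sketch of that scan; the taken set is this run's, and the +9 stride matches the 49→58→67→76→85 sequence in the log but is an assumption about the general rule:

    package main

    import (
    	"fmt"
    	"net"
    )

    // Walk candidate private /24s and take the first one not claimed by
    // an existing bridge network.
    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    	}
    	for octet := 49; octet <= 255; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[cidr] {
    			continue
    		}
    		_, subnet, _ := net.ParseCIDR(cidr)
    		fmt.Println("using free private subnet", subnet)
    		return
    	}
    }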
	I1119 22:38:09.389488  215017 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-227235" container
	I1119 22:38:09.389570  215017 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:38:09.426012  215017 cli_runner.go:164] Run: docker volume create embed-certs-227235 --label name.minikube.sigs.k8s.io=embed-certs-227235 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:38:09.458413  215017 oci.go:103] Successfully created a docker volume embed-certs-227235
	I1119 22:38:09.458493  215017 cli_runner.go:164] Run: docker run --rm --name embed-certs-227235-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-227235 --entrypoint /usr/bin/test -v embed-certs-227235:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:38:10.048314  215017 oci.go:107] Successfully prepared a docker volume embed-certs-227235
	I1119 22:38:10.048380  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:10.048394  215017 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:38:10.048475  215017 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-227235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 22:38:12.761129  213719 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-570856 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:12.776448  213719 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 22:38:12.782082  213719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:12.793881  213719 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-570856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:38:12.794007  213719 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:12.794066  213719 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:12.828546  213719 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:12.828565  213719 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:38:12.828628  213719 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:12.874453  213719 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:12.874474  213719 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:38:12.874485  213719 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 containerd true true} ...
	I1119 22:38:12.874575  213719 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-570856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
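One detail worth noting in the drop-in above: the first, empty ExecStart= line is deliberate. In a systemd drop-in, an empty ExecStart= clears any command inherited from the base kubelet.service, so the second ExecStart line fully defines the kubelet invocation rather than appending a duplicate.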
	I1119 22:38:12.874636  213719 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:38:12.913225  213719 cni.go:84] Creating CNI manager for ""
	I1119 22:38:12.913245  213719 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:12.913259  213719 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:38:12.913282  213719 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-570856 NodeName:default-k8s-diff-port-570856 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:38:12.913398  213719 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-570856"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
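The multi-document YAML above is rendered from the kubeadm options struct logged at kubeadm.go:190 and then copied to /var/tmp/minikube/kubeadm.yaml.new (below). A minimal Go sketch of that render step, using text/template with a hypothetical template and struct, not minikube's actual code:

    // Illustrative only: render a kubeadm InitConfiguration fragment
    // from parameters, in the spirit of the dump above.
    package main

    import (
    	"os"
    	"text/template"
    )

    type initCfg struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	// Render to stdout; minikube instead scp's the result to
    	// /var/tmp/minikube/kubeadm.yaml.new over SSH.
    	t.Execute(os.Stdout, initCfg{"192.168.76.2", 8444, "default-k8s-diff-port-570856"})
    }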
	I1119 22:38:12.913465  213719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:38:12.935388  213719 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:38:12.935468  213719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:38:12.971226  213719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1119 22:38:13.007966  213719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:38:13.024911  213719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I1119 22:38:13.042516  213719 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:38:13.046335  213719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
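The one-liner above makes the /etc/hosts update idempotent: drop any existing line ending in the tab-separated name, append the fresh mapping, and copy the result back via sudo. A sketch of the same idea in Go (the direct in-place write is an assumption; the log goes through /tmp/h.$$ and sudo cp):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost rewrites hostsPath so exactly one line maps name to ip.
    func pinHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Mirror grep -v $'\t<name>$': discard stale entries for this name.
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := pinHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }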
	I1119 22:38:13.059831  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:13.191953  213719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:13.211424  213719 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856 for IP: 192.168.76.2
	I1119 22:38:13.211448  213719 certs.go:195] generating shared ca certs ...
	I1119 22:38:13.211464  213719 certs.go:227] acquiring lock for ca certs: {Name:mk76285c445bf14c1e73dedba3201c9181209ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.211598  213719 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key
	I1119 22:38:13.211646  213719 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key
	I1119 22:38:13.211656  213719 certs.go:257] generating profile certs ...
	I1119 22:38:13.211720  213719 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key
	I1119 22:38:13.211738  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt with IP's: []
	I1119 22:38:13.477759  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt ...
	I1119 22:38:13.477790  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt: {Name:mk4af4f401c57a7635e92da9feef7f2a7cfe3346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.477979  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key ...
	I1119 22:38:13.477993  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.key: {Name:mkf947f0bf4e302c69721a8e2f74d4a272d67d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.478093  213719 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b
	I1119 22:38:13.478112  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 22:38:13.929859  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b ...
	I1119 22:38:13.929894  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b: {Name:mkb8c9d5541b894a86911cf54efc4b7ac6afa1c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.930079  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b ...
	I1119 22:38:13.930094  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b: {Name:mk87a24e67d10968973a6f22462b3f5c313a93de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:13.930252  213719 certs.go:382] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt.8301174b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt
	I1119 22:38:13.930347  213719 certs.go:386] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key.8301174b -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key
	I1119 22:38:13.930411  213719 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key
	I1119 22:38:13.930431  213719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt with IP's: []
	I1119 22:38:14.332796  213719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt ...
	I1119 22:38:14.332825  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt: {Name:mkc687d4f88c0016e52dc106cbb67f62cb641716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:14.339910  213719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key ...
	I1119 22:38:14.339932  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key: {Name:mk85a94508f4f26fe196530cf3fdf265d53e1f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
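The apiserver profile cert generated above is signed by the shared minikubeCA and carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]. A self-contained sketch of producing such a CA-signed serving cert with crypto/x509 (illustrative only, not minikube's crypto.go; errors elided for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// A throwaway CA standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf cert with the IP SANs seen in the log above.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }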
	I1119 22:38:14.340150  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem (1338 bytes)
	W1119 22:38:14.340197  213719 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144_empty.pem, impossibly tiny 0 bytes
	I1119 22:38:14.340211  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:38:14.340237  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:38:14.340265  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:38:14.340292  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem (1675 bytes)
	I1119 22:38:14.340340  213719 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:14.340962  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:38:14.361559  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1119 22:38:14.382612  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:38:14.402496  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:38:14.420924  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:38:14.441447  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:38:14.460685  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:38:14.479294  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:38:14.497456  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem --> /usr/share/ca-certificates/4144.pem (1338 bytes)
	I1119 22:38:14.516533  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /usr/share/ca-certificates/41442.pem (1708 bytes)
	I1119 22:38:14.535911  213719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:38:14.553295  213719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:38:14.567201  213719 ssh_runner.go:195] Run: openssl version
	I1119 22:38:14.573427  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4144.pem && ln -fs /usr/share/ca-certificates/4144.pem /etc/ssl/certs/4144.pem"
	I1119 22:38:14.582011  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.585596  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.585711  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem
	I1119 22:38:14.626575  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4144.pem /etc/ssl/certs/51391683.0"
	I1119 22:38:14.635818  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41442.pem && ln -fs /usr/share/ca-certificates/41442.pem /etc/ssl/certs/41442.pem"
	I1119 22:38:14.644258  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.648142  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.648249  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem
	I1119 22:38:14.689425  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41442.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:38:14.698767  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:38:14.708989  213719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.713003  213719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.713064  213719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:14.755515  213719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
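Each trusted PEM above is installed twice: copied to /usr/share/ca-certificates, then symlinked into /etc/ssl/certs under its OpenSSL subject hash (the 8-hex-digit names 51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL's hashed-directory lookup finds it. A sketch of that hash-and-link step (paths assumed; requires the openssl binary, exactly as invoked in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCert computes the OpenSSL subject hash of a PEM and links
    // it as <hash>.0 in certsDir, mirroring the ln -fs calls above.
    func installCert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
    	os.Remove(link) // mirror ln -fs: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }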
	I1119 22:38:14.766003  213719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:38:14.769904  213719 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:38:14.769997  213719 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-570856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-570856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:38:14.770068  213719 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:38:14.770172  213719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:38:14.831712  213719 cri.go:89] found id: ""
	I1119 22:38:14.831793  213719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:38:14.844012  213719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:38:14.859844  213719 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:38:14.859902  213719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:38:14.875606  213719 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:38:14.875626  213719 kubeadm.go:158] found existing configuration files:
	
	I1119 22:38:14.875678  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 22:38:14.887366  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:38:14.887426  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:38:14.898741  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 22:38:14.907757  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:38:14.907816  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:38:14.915056  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 22:38:14.925190  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:38:14.925246  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:38:14.933043  213719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 22:38:14.943964  213719 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:38:14.944080  213719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
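The loop above greps each kubeconfig under /etc/kubernetes for the expected control-plane URL and removes any file that does not contain it; on this first start every grep exits 2 because the files do not exist yet, so the rm calls are no-ops. The equivalent check, sketched in Go (file list taken from the log, logic assumed):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	endpoint := []byte("https://control-plane.minikube.internal:8444")
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err != nil || !bytes.Contains(data, endpoint) {
    			// Missing or pointing elsewhere: remove so kubeadm regenerates it.
    			os.Remove(conf)
    			fmt.Println("removed", conf)
    		}
    	}
    }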
	I1119 22:38:14.956850  213719 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:38:15.022467  213719 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:38:15.022528  213719 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:38:15.074445  213719 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:38:15.074520  213719 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:38:15.074585  213719 kubeadm.go:319] OS: Linux
	I1119 22:38:15.074665  213719 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:38:15.074741  213719 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:38:15.074834  213719 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:38:15.074895  213719 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:38:15.074955  213719 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:38:15.075040  213719 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:38:15.075127  213719 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:38:15.075186  213719 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:38:15.075235  213719 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:38:15.163382  213719 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:38:15.163500  213719 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:38:15.163599  213719 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:38:15.178538  213719 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:38:15.183821  213719 out.go:252]   - Generating certificates and keys ...
	I1119 22:38:15.183926  213719 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:38:15.184002  213719 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:38:16.331729  213719 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:38:14.780147  215017 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-227235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.73163045s)
	I1119 22:38:14.780195  215017 kic.go:203] duration metric: took 4.731797196s to extract preloaded images to volume ...
	W1119 22:38:14.780320  215017 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:38:14.780432  215017 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:38:14.866741  215017 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-227235 --name embed-certs-227235 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-227235 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-227235 --network embed-certs-227235 --ip 192.168.85.2 --volume embed-certs-227235:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:38:15.242087  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Running}}
	I1119 22:38:15.266134  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:15.289559  215017 cli_runner.go:164] Run: docker exec embed-certs-227235 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:38:15.358592  215017 oci.go:144] the created container "embed-certs-227235" has a running status.
	I1119 22:38:15.358618  215017 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa...
	I1119 22:38:16.151858  215017 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:38:16.174089  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:16.193774  215017 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:38:16.193801  215017 kic_runner.go:114] Args: [docker exec --privileged embed-certs-227235 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:38:16.253392  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:16.274685  215017 machine.go:94] provisionDockerMachine start ...
	I1119 22:38:16.274793  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:16.295933  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:16.296265  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:16.296279  215017 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:38:16.296925  215017 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 22:38:16.648850  213719 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:38:17.027534  213719 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:38:17.535405  213719 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:38:18.457071  213719 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:38:18.457651  213719 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-570856 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:38:18.804201  213719 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:38:18.804516  213719 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-570856 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:38:19.251890  213719 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:38:19.443919  213719 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:38:19.989042  213719 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:38:19.989481  213719 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:38:20.248156  213719 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:38:20.575822  213719 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:38:21.322497  213719 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:38:21.582497  213719 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:38:22.046631  213719 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:38:22.048792  213719 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:38:22.056417  213719 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:38:19.458283  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-227235
	
	I1119 22:38:19.458361  215017 ubuntu.go:182] provisioning hostname "embed-certs-227235"
	I1119 22:38:19.458439  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:19.482663  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:19.482955  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:19.482966  215017 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-227235 && echo "embed-certs-227235" | sudo tee /etc/hostname
	I1119 22:38:19.668227  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-227235
	
	I1119 22:38:19.668364  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:19.696161  215017 main.go:143] libmachine: Using SSH client type: native
	I1119 22:38:19.696518  215017 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1119 22:38:19.696542  215017 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-227235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-227235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-227235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:38:19.844090  215017 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:38:19.844206  215017 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-2347/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-2347/.minikube}
	I1119 22:38:19.844292  215017 ubuntu.go:190] setting up certificates
	I1119 22:38:19.844349  215017 provision.go:84] configureAuth start
	I1119 22:38:19.844460  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:19.871920  215017 provision.go:143] copyHostCerts
	I1119 22:38:19.871992  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem, removing ...
	I1119 22:38:19.872014  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem
	I1119 22:38:19.872097  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem (1082 bytes)
	I1119 22:38:19.872221  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem, removing ...
	I1119 22:38:19.872227  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem
	I1119 22:38:19.872260  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem (1123 bytes)
	I1119 22:38:19.872326  215017 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem, removing ...
	I1119 22:38:19.872335  215017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem
	I1119 22:38:19.872358  215017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem (1675 bytes)
	I1119 22:38:19.872412  215017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem org=jenkins.embed-certs-227235 san=[127.0.0.1 192.168.85.2 embed-certs-227235 localhost minikube]
	I1119 22:38:20.323404  215017 provision.go:177] copyRemoteCerts
	I1119 22:38:20.323526  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:38:20.323586  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.356892  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.470993  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:38:20.504362  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 22:38:20.524210  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:38:20.544124  215017 provision.go:87] duration metric: took 699.7216ms to configureAuth
	I1119 22:38:20.544197  215017 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:38:20.544412  215017 config.go:182] Loaded profile config "embed-certs-227235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:20.544464  215017 machine.go:97] duration metric: took 4.26975387s to provisionDockerMachine
	I1119 22:38:20.544486  215017 client.go:176] duration metric: took 11.356016876s to LocalClient.Create
	I1119 22:38:20.544525  215017 start.go:167] duration metric: took 11.356113575s to libmachine.API.Create "embed-certs-227235"
	I1119 22:38:20.544554  215017 start.go:293] postStartSetup for "embed-certs-227235" (driver="docker")
	I1119 22:38:20.544591  215017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:38:20.544678  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:38:20.544756  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.565300  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.667067  215017 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:38:20.670916  215017 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:38:20.670945  215017 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:38:20.670955  215017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/addons for local assets ...
	I1119 22:38:20.671006  215017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/files for local assets ...
	I1119 22:38:20.671083  215017 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem -> 41442.pem in /etc/ssl/certs
	I1119 22:38:20.671184  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:38:20.680266  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:20.699713  215017 start.go:296] duration metric: took 155.103351ms for postStartSetup
	I1119 22:38:20.700150  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:20.718277  215017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json ...
	I1119 22:38:20.718546  215017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:38:20.718585  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.738828  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.841296  215017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:38:20.847214  215017 start.go:128] duration metric: took 11.662337268s to createHost
	I1119 22:38:20.847254  215017 start.go:83] releasing machines lock for "embed-certs-227235", held for 11.662472169s
	I1119 22:38:20.847344  215017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-227235
	I1119 22:38:20.867867  215017 ssh_runner.go:195] Run: cat /version.json
	I1119 22:38:20.867920  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.868163  215017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:38:20.868220  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:20.898565  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:20.913281  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:21.018482  215017 ssh_runner.go:195] Run: systemctl --version
	I1119 22:38:21.126924  215017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:38:21.133433  215017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:38:21.133571  215017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:38:21.174802  215017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:38:21.174882  215017 start.go:496] detecting cgroup driver to use...
	I1119 22:38:21.174939  215017 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:38:21.175034  215017 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:38:21.196072  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:38:21.213194  215017 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:38:21.213331  215017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:38:21.235649  215017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:38:21.258133  215017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:38:21.407367  215017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:38:21.569958  215017 docker.go:234] disabling docker service ...
	I1119 22:38:21.570075  215017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:38:21.595432  215017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:38:21.609975  215017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:38:21.765673  215017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:38:21.920710  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:38:21.936161  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:38:21.954615  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:38:21.964563  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:38:21.973986  215017 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1119 22:38:21.974106  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1119 22:38:21.983607  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:21.993186  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:38:22.003994  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:38:22.014801  215017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:38:22.024224  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:38:22.034441  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:38:22.044428  215017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 22:38:22.055950  215017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:38:22.067426  215017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:38:22.076858  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:22.269285  215017 ssh_runner.go:195] Run: sudo systemctl restart containerd
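The sed sequence above rewrites individual keys in /etc/containerd/config.toml (the pause sandbox image, SystemdCgroup = false to match the detected cgroupfs driver, the runc v2 runtime, the CNI conf dir) and then restarts containerd. One such rewrite, sketched in Go with regexp (path assumed writable; run with privileges):

    package main

    import (
    	"os"
    	"regexp"
    )

    // Force SystemdCgroup = false in a containerd config, preserving
    // the line's original indentation, like the sed call in the log.
    func main() {
    	path := "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    	// A 'systemctl restart containerd' (as in the log) must follow
    	// for the change to take effect.
    }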
	I1119 22:38:22.431475  215017 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:38:22.431618  215017 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:38:22.438650  215017 start.go:564] Will wait 60s for crictl version
	I1119 22:38:22.438766  215017 ssh_runner.go:195] Run: which crictl
	I1119 22:38:22.442622  215017 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:38:22.484750  215017 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:38:22.484877  215017 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:22.511742  215017 ssh_runner.go:195] Run: containerd --version
	I1119 22:38:22.537445  215017 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:38:22.540815  215017 cli_runner.go:164] Run: docker network inspect embed-certs-227235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:38:22.557518  215017 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:38:22.561769  215017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:22.577497  215017 kubeadm.go:884] updating cluster {Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:38:22.577609  215017 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:38:22.577676  215017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:22.612620  215017 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:22.612641  215017 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:38:22.612700  215017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:38:22.639391  215017 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:38:22.639472  215017 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:38:22.639495  215017 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 22:38:22.639629  215017 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-227235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:38:22.639737  215017 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:38:22.675658  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:22.675677  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:22.675692  215017 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:38:22.675717  215017 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-227235 NodeName:embed-certs-227235 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:38:22.675829  215017 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-227235"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:38:22.675898  215017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:38:22.685785  215017 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:38:22.685854  215017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:38:22.694496  215017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 22:38:22.708805  215017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:38:22.723606  215017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1119 22:38:22.738717  215017 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:38:22.742965  215017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:38:22.753270  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:22.906872  215017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:22.924949  215017 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235 for IP: 192.168.85.2
	I1119 22:38:22.925022  215017 certs.go:195] generating shared ca certs ...
	I1119 22:38:22.925062  215017 certs.go:227] acquiring lock for ca certs: {Name:mk76285c445bf14c1e73dedba3201c9181209ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:22.925256  215017 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key
	I1119 22:38:22.925342  215017 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key
	I1119 22:38:22.925388  215017 certs.go:257] generating profile certs ...
	I1119 22:38:22.925497  215017 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key
	I1119 22:38:22.925541  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt with IP's: []
	I1119 22:38:22.060241  213719 out.go:252]   - Booting up control plane ...
	I1119 22:38:22.060350  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:38:22.060434  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:38:22.060504  213719 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:38:22.079017  213719 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:38:22.079368  213719 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:38:22.087584  213719 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:38:22.087933  213719 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:38:22.087982  213719 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:38:22.256548  213719 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:38:22.256676  213719 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:38:23.257718  213719 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001280368s
	I1119 22:38:23.261499  213719 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:38:23.261885  213719 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1119 22:38:23.262185  213719 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:38:23.262436  213719 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:38:23.993413  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt ...
	I1119 22:38:23.993490  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.crt: {Name:mk9390e430c2adf83fa83c8b0fc6b544e7c6ac73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:23.993723  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key ...
	I1119 22:38:23.993760  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/client.key: {Name:mkcc129ed7fd3a94daf755b808df5c2ca7b4f55b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:23.993902  215017 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43
	I1119 22:38:23.993944  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 22:38:24.949512  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 ...
	I1119 22:38:24.949545  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43: {Name:mk857e8f674694c0bdb694030b2402c50649af7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:24.949819  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43 ...
	I1119 22:38:24.949838  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43: {Name:mke1e1b8b382f368b842b0b0ebd43fcff825ce2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:24.949968  215017 certs.go:382] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt.9b81cf43 -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt
	I1119 22:38:24.950099  215017 certs.go:386] copying /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key.9b81cf43 -> /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key
	I1119 22:38:24.950220  215017 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key
	I1119 22:38:24.950254  215017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt with IP's: []
	I1119 22:38:25.380015  215017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt ...
	I1119 22:38:25.380052  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt: {Name:mk60463442a2346a7467c65f294d7610875ba798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:25.381096  215017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key ...
	I1119 22:38:25.381124  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key: {Name:mkcc9ad63005e92a3409d0552d96d1073c0ab984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:25.381427  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem (1338 bytes)
	W1119 22:38:25.381505  215017 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144_empty.pem, impossibly tiny 0 bytes
	I1119 22:38:25.381526  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:38:25.381569  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:38:25.381616  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:38:25.381661  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem (1675 bytes)
	I1119 22:38:25.381777  215017 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:38:25.382497  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:38:25.423747  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1119 22:38:25.460637  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:38:25.483373  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:38:25.503061  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 22:38:25.523436  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:38:25.548990  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:38:25.581396  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:38:25.622314  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem --> /usr/share/ca-certificates/4144.pem (1338 bytes)
	I1119 22:38:25.653452  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /usr/share/ca-certificates/41442.pem (1708 bytes)
	I1119 22:38:25.693769  215017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:38:25.730224  215017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:38:25.757903  215017 ssh_runner.go:195] Run: openssl version
	I1119 22:38:25.770954  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4144.pem && ln -fs /usr/share/ca-certificates/4144.pem /etc/ssl/certs/4144.pem"
	I1119 22:38:25.787344  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.792427  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.792569  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem
	I1119 22:38:25.854376  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4144.pem /etc/ssl/certs/51391683.0"
	I1119 22:38:25.867349  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41442.pem && ln -fs /usr/share/ca-certificates/41442.pem /etc/ssl/certs/41442.pem"
	I1119 22:38:25.885000  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.895195  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.895369  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem
	I1119 22:38:25.952771  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41442.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:38:25.969512  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:38:25.988362  215017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:25.994984  215017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:25.995107  215017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:38:26.054751  215017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:38:26.081314  215017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:38:26.089485  215017 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
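The failed stat above is the expected path on a fresh profile: exit status 1 with "No such file or directory" tells minikube no kubelet client cert exists yet, so kubeadm will generate the full set below. The same probe by hand, as a sketch run inside the node:

    if stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
      echo "cert present: restart path"
    else
      echo "cert missing: first start, kubeadm generates certificates"
    fi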
	I1119 22:38:26.089616  215017 kubeadm.go:401] StartCluster: {Name:embed-certs-227235 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-227235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
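The StartCluster line above dumps the full machine config; the same data is persisted as JSON under the profile directory, so it can be read without scraping logs. A sketch, assuming the default minikube profile layout and that jq is installed:

    jq . /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/embed-certs-227235/config.json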
	I1119 22:38:26.089729  215017 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:38:26.089883  215017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:38:26.175081  215017 cri.go:89] found id: ""
	I1119 22:38:26.175273  215017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:38:26.201739  215017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:38:26.213453  215017 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:38:26.213538  215017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:38:26.227920  215017 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:38:26.227957  215017 kubeadm.go:158] found existing configuration files:
	
	I1119 22:38:26.228016  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:38:26.238822  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:38:26.238956  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:38:26.248847  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:38:26.259874  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:38:26.259981  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:38:26.269610  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:38:26.280662  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:38:26.280762  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:38:26.291067  215017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:38:26.299774  215017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:38:26.299863  215017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
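The four grep/rm pairs above apply one pattern per kubeconfig: keep the file only if it already points at the expected API endpoint, otherwise remove it as stale. Condensed into a loop, as a sketch using this profile's endpoint:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done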
	I1119 22:38:26.307272  215017 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:38:26.359370  215017 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:38:26.359879  215017 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:38:26.392070  215017 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:38:26.392176  215017 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:38:26.392260  215017 kubeadm.go:319] OS: Linux
	I1119 22:38:26.392332  215017 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:38:26.392404  215017 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:38:26.392515  215017 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:38:26.392603  215017 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:38:26.392689  215017 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:38:26.392799  215017 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:38:26.392885  215017 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:38:26.392964  215017 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:38:26.393042  215017 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:38:26.488613  215017 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:38:26.488982  215017 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:38:26.489119  215017 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
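As the preflight output itself suggests, the image pull can be done ahead of time against the config minikube generated; a sketch run inside the node:

    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml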
	I1119 22:38:26.506528  215017 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:38:26.511504  215017 out.go:252]   - Generating certificates and keys ...
	I1119 22:38:26.511614  215017 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:38:26.511693  215017 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:38:27.434809  215017 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:38:27.852737  215017 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:38:28.219331  215017 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:38:28.667646  215017 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:38:29.503070  215017 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:38:29.503604  215017 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-227235 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:38:29.941520  215017 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:38:29.942072  215017 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-227235 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:38:30.399611  215017 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:38:30.598854  215017 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:38:31.066766  215017 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:38:31.067322  215017 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:38:31.727030  215017 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:38:33.054496  215017 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:38:33.215756  215017 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:38:33.577706  215017 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:38:33.942194  215017 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:38:33.943308  215017 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:38:33.946457  215017 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:38:33.309225  213719 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 10.04648217s
	I1119 22:38:36.096444  213719 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.833517484s
	I1119 22:38:37.264214  213719 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.001762391s
	I1119 22:38:37.296022  213719 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:38:37.335127  213719 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:38:37.354913  213719 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:38:37.355423  213719 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-570856 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:38:37.372044  213719 kubeadm.go:319] [bootstrap-token] Using token: r8vw8k.tssokqfhghfm62o1
	I1119 22:38:33.949816  215017 out.go:252]   - Booting up control plane ...
	I1119 22:38:33.949930  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:38:33.950028  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:38:33.951280  215017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:38:33.979582  215017 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:38:33.979702  215017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:38:33.992539  215017 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:38:33.992652  215017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:38:33.992697  215017 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:38:34.209173  215017 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:38:34.209304  215017 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:38:35.710488  215017 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501663596s
	I1119 22:38:35.713801  215017 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:38:35.714133  215017 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 22:38:35.714829  215017 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:38:35.715359  215017 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
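The three control-plane-check URLs above can also be probed by hand while waiting. The components serve self-signed certificates, hence -k; a sketch run inside the node (e.g. via minikube ssh):

    curl -sk https://192.168.85.2:8443/livez      # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez        # kube-scheduler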
	I1119 22:38:37.374987  213719 out.go:252]   - Configuring RBAC rules ...
	I1119 22:38:37.375116  213719 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:38:37.383216  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:38:37.395526  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:38:37.407816  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:38:37.414859  213719 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:38:37.420042  213719 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:38:37.672205  213719 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:38:38.187591  213719 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:38:38.676130  213719 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:38:38.677635  213719 kubeadm.go:319] 
	I1119 22:38:38.677723  213719 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:38:38.677730  213719 kubeadm.go:319] 
	I1119 22:38:38.677810  213719 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:38:38.677815  213719 kubeadm.go:319] 
	I1119 22:38:38.677841  213719 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:38:38.678403  213719 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:38:38.678471  213719 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:38:38.678477  213719 kubeadm.go:319] 
	I1119 22:38:38.678533  213719 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:38:38.678538  213719 kubeadm.go:319] 
	I1119 22:38:38.678587  213719 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:38:38.678591  213719 kubeadm.go:319] 
	I1119 22:38:38.678645  213719 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:38:38.678746  213719 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:38:38.678817  213719 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:38:38.678822  213719 kubeadm.go:319] 
	I1119 22:38:38.679193  213719 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:38:38.679286  213719 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:38:38.679291  213719 kubeadm.go:319] 
	I1119 22:38:38.679572  213719 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token r8vw8k.tssokqfhghfm62o1 \
	I1119 22:38:38.679686  213719 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 \
	I1119 22:38:38.690497  213719 kubeadm.go:319] 	--control-plane 
	I1119 22:38:38.690515  213719 kubeadm.go:319] 
	I1119 22:38:38.690863  213719 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:38:38.690881  213719 kubeadm.go:319] 
	I1119 22:38:38.691192  213719 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token r8vw8k.tssokqfhghfm62o1 \
	I1119 22:38:38.691498  213719 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 
	I1119 22:38:38.710307  213719 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:38:38.710544  213719 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:38:38.710653  213719 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
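The Service-Kubelet warning is harmless in this setup, since minikube starts the unit directly (see the sudo systemctl start kubelet calls earlier); on a hand-managed node the fix is exactly what the message says:

    sudo systemctl enable kubelet.service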
	I1119 22:38:38.710672  213719 cni.go:84] Creating CNI manager for ""
	I1119 22:38:38.710679  213719 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:38.713840  213719 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:38:38.716961  213719 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:38:38.736887  213719 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:38:38.736905  213719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:38:38.789317  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:38:39.400153  213719 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:38:39.400321  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:39.400530  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-570856 minikube.k8s.io/updated_at=2025_11_19T22_38_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=default-k8s-diff-port-570856 minikube.k8s.io/primary=true
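To confirm the node labels applied above took effect, a sketch:

    kubectl get node default-k8s-diff-port-570856 --show-labels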
	I1119 22:38:39.975271  213719 ops.go:34] apiserver oom_adj: -16
	I1119 22:38:39.975391  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:40.475885  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:40.976254  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:41.475492  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:41.975953  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:42.476216  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:42.976019  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:43.476374  213719 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:43.938571  213719 kubeadm.go:1114] duration metric: took 4.538317084s to wait for elevateKubeSystemPrivileges
	I1119 22:38:43.938601  213719 kubeadm.go:403] duration metric: took 29.168610658s to StartCluster
	I1119 22:38:43.938617  213719 settings.go:142] acquiring lock: {Name:mk5c8f7d46662d574c7e53cf7b09709855a1e14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:43.938675  213719 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:43.939379  213719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:43.939602  213719 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:43.939699  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:38:43.939950  213719 config.go:182] Loaded profile config "default-k8s-diff-port-570856": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:43.939984  213719 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:38:43.940039  213719 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-570856"
	I1119 22:38:43.940056  213719 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-570856"
	I1119 22:38:43.940077  213719 host.go:66] Checking if "default-k8s-diff-port-570856" exists ...
	I1119 22:38:43.940595  213719 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-570856"
	I1119 22:38:43.940614  213719 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-570856"
	I1119 22:38:43.940913  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:43.941163  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:43.943262  213719 out.go:179] * Verifying Kubernetes components...
	I1119 22:38:43.946436  213719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:43.988827  213719 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:38:43.992407  213719 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:43.992429  213719 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:38:43.992505  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:44.003465  213719 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-570856"
	I1119 22:38:44.003510  213719 host.go:66] Checking if "default-k8s-diff-port-570856" exists ...
	I1119 22:38:44.003968  213719 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-570856 --format={{.State.Status}}
	I1119 22:38:44.031387  213719 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:44.031407  213719 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:38:44.031480  213719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-570856
	I1119 22:38:44.054335  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:44.071105  213719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/default-k8s-diff-port-570856/id_rsa Username:docker}
	I1119 22:38:44.576022  213719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:38:44.576179  213719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:44.632284  213719 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:44.830916  213719 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:45.842317  213719 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.266107104s)
	I1119 22:38:45.843122  213719 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-570856" to be "Ready" ...
	I1119 22:38:45.843439  213719 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.267383122s)
	I1119 22:38:45.843467  213719 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
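The sed pipeline above splices a hosts block into the Corefile ahead of the forward directive, which is what makes host.minikube.internal resolvable from pods. The injected fragment can be verified with a sketch like:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

The output should now contain the 192.168.76.1 host.minikube.internal entry followed by fallthrough.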
	I1119 22:38:45.844308  213719 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21199704s)
	I1119 22:38:46.281571  213719 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.450606827s)
	I1119 22:38:46.284845  213719 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 22:38:46.287763  213719 addons.go:515] duration metric: took 2.347755369s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:38:46.347624  213719 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-570856" context rescaled to 1 replicas
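Rescaling coredns to a single replica appears to be minikube's default on single-node clusters; the equivalent manual command, as a sketch:

    kubectl -n kube-system scale deployment coredns --replicas=1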
	I1119 22:38:44.428112  215017 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 8.712323929s
	I1119 22:38:45.320373  215017 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.604203465s
	I1119 22:38:46.717967  215017 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.003347835s
	I1119 22:38:46.741715  215017 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:38:46.757144  215017 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:38:46.772462  215017 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:38:46.772924  215017 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-227235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:38:46.785381  215017 kubeadm.go:319] [bootstrap-token] Using token: ocom7o.y2g4phnwe8gpvos5
	I1119 22:38:46.788355  215017 out.go:252]   - Configuring RBAC rules ...
	I1119 22:38:46.788494  215017 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:38:46.793683  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:38:46.802650  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:38:46.811439  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:38:46.816154  215017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:38:46.823297  215017 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:38:47.128653  215017 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:38:47.591010  215017 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:38:48.125064  215017 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:38:48.126191  215017 kubeadm.go:319] 
	I1119 22:38:48.126264  215017 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:38:48.126270  215017 kubeadm.go:319] 
	I1119 22:38:48.126346  215017 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:38:48.126350  215017 kubeadm.go:319] 
	I1119 22:38:48.126376  215017 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:38:48.126445  215017 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:38:48.126502  215017 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:38:48.126506  215017 kubeadm.go:319] 
	I1119 22:38:48.126560  215017 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:38:48.126564  215017 kubeadm.go:319] 
	I1119 22:38:48.126611  215017 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:38:48.126618  215017 kubeadm.go:319] 
	I1119 22:38:48.126669  215017 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:38:48.126743  215017 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:38:48.126818  215017 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:38:48.126826  215017 kubeadm.go:319] 
	I1119 22:38:48.126910  215017 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:38:48.126985  215017 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:38:48.126989  215017 kubeadm.go:319] 
	I1119 22:38:48.127072  215017 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ocom7o.y2g4phnwe8gpvos5 \
	I1119 22:38:48.127175  215017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 \
	I1119 22:38:48.127195  215017 kubeadm.go:319] 	--control-plane 
	I1119 22:38:48.127200  215017 kubeadm.go:319] 
	I1119 22:38:48.127283  215017 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:38:48.127287  215017 kubeadm.go:319] 
	I1119 22:38:48.127368  215017 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ocom7o.y2g4phnwe8gpvos5 \
	I1119 22:38:48.127478  215017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f3dc8233c963d7fa33b7a72da6102de3e0dbc1bf6e99b77f8426922389e565f9 
	I1119 22:38:48.131460  215017 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:38:48.131800  215017 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:38:48.131963  215017 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:38:48.132002  215017 cni.go:84] Creating CNI manager for ""
	I1119 22:38:48.132025  215017 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:38:48.135396  215017 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:38:48.138681  215017 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:38:48.143238  215017 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:38:48.143261  215017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:38:48.157842  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:38:48.509463  215017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:38:48.509605  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:48.509695  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-227235 minikube.k8s.io/updated_at=2025_11_19T22_38_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=embed-certs-227235 minikube.k8s.io/primary=true
	I1119 22:38:48.531347  215017 ops.go:34] apiserver oom_adj: -16
	W1119 22:38:47.847437  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:50.346251  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:38:48.707714  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:49.208479  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:49.708331  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:50.207957  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:50.708351  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:51.208551  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:51.707874  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.208750  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.708197  215017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:38:52.870444  215017 kubeadm.go:1114] duration metric: took 4.360885722s to wait for elevateKubeSystemPrivileges
	I1119 22:38:52.870476  215017 kubeadm.go:403] duration metric: took 26.780891514s to StartCluster
	I1119 22:38:52.870495  215017 settings.go:142] acquiring lock: {Name:mk5c8f7d46662d574c7e53cf7b09709855a1e14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:52.870563  215017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:38:52.871877  215017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:38:52.872086  215017 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:38:52.872205  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:38:52.872510  215017 config.go:182] Loaded profile config "embed-certs-227235": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:38:52.872559  215017 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:38:52.872623  215017 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-227235"
	I1119 22:38:52.872642  215017 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-227235"
	I1119 22:38:52.872666  215017 host.go:66] Checking if "embed-certs-227235" exists ...
	I1119 22:38:52.873151  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.873401  215017 addons.go:70] Setting default-storageclass=true in profile "embed-certs-227235"
	I1119 22:38:52.873423  215017 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-227235"
	I1119 22:38:52.873686  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.875844  215017 out.go:179] * Verifying Kubernetes components...
	I1119 22:38:52.879063  215017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:38:52.907006  215017 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:38:52.909996  215017 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:52.910022  215017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:38:52.910096  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:52.917662  215017 addons.go:239] Setting addon default-storageclass=true in "embed-certs-227235"
	I1119 22:38:52.917721  215017 host.go:66] Checking if "embed-certs-227235" exists ...
	I1119 22:38:52.918300  215017 cli_runner.go:164] Run: docker container inspect embed-certs-227235 --format={{.State.Status}}
	I1119 22:38:52.944204  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:52.957685  215017 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:52.957706  215017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:38:52.957769  215017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-227235
	I1119 22:38:52.993629  215017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/embed-certs-227235/id_rsa Username:docker}
	I1119 22:38:53.201073  215017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:38:53.201195  215017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:38:53.314355  215017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:38:53.327779  215017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:38:53.841120  215017 node_ready.go:35] waiting up to 6m0s for node "embed-certs-227235" to be "Ready" ...
	I1119 22:38:53.841457  215017 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 22:38:54.280299  215017 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1119 22:38:52.346734  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:54.347319  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:38:54.283209  215017 addons.go:515] duration metric: took 1.410633606s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:38:54.349594  215017 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-227235" context rescaled to 1 replicas
	W1119 22:38:55.844628  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:38:58.344650  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:38:56.846106  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:38:58.846730  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:00.846861  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:00.347351  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:02.844246  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:02.847116  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:05.346461  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:04.845042  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:07.345010  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:07.347215  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:09.846094  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:09.345198  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:11.346411  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:11.846299  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:13.846861  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:16.347393  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:13.844623  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:16.344779  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:18.345372  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:18.846715  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:21.346432  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	W1119 22:39:20.347964  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:22.843854  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:23.846693  213719 node_ready.go:57] node "default-k8s-diff-port-570856" has "Ready":"False" status (will retry)
	I1119 22:39:25.847621  213719 node_ready.go:49] node "default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:25.847652  213719 node_ready.go:38] duration metric: took 40.004497931s for node "default-k8s-diff-port-570856" to be "Ready" ...
	I1119 22:39:25.847666  213719 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:39:25.847724  213719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:39:25.874926  213719 api_server.go:72] duration metric: took 41.935286387s to wait for apiserver process to appear ...
	I1119 22:39:25.874949  213719 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:39:25.874968  213719 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1119 22:39:25.885461  213719 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1119 22:39:25.887414  213719 api_server.go:141] control plane version: v1.34.1
	I1119 22:39:25.887438  213719 api_server.go:131] duration metric: took 12.482962ms to wait for apiserver health ...
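The healthz probe is a plain HTTPS GET and can be reproduced by hand. /healthz normally requires client credentials, so the certificate paths below are illustrative placeholders (on the node they are the ones referenced by the kubeconfig in these logs):

    # reproduce minikube's health check against the API server
    curl --cacert ca.crt --cert client.crt --key client.key \
         https://192.168.76.2:8444/healthz
    # expected body: ok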
	I1119 22:39:25.887448  213719 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:39:25.891159  213719 system_pods.go:59] 8 kube-system pods found
	I1119 22:39:25.891193  213719 system_pods.go:61] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:25.891200  213719 system_pods.go:61] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:25.891207  213719 system_pods.go:61] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:25.891212  213719 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:25.891217  213719 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:25.891221  213719 system_pods.go:61] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:25.891226  213719 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:25.891231  213719 system_pods.go:61] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:25.891238  213719 system_pods.go:74] duration metric: took 3.784369ms to wait for pod list to return data ...
	I1119 22:39:25.891248  213719 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:39:25.894907  213719 default_sa.go:45] found service account: "default"
	I1119 22:39:25.894971  213719 default_sa.go:55] duration metric: took 3.716182ms for default service account to be created ...
	I1119 22:39:25.894995  213719 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:39:25.898958  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:25.899042  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:25.899064  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:25.899105  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:25.899128  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:25.899147  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:25.899170  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:25.899190  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:25.899259  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:25.899299  213719 retry.go:31] will retry after 294.705373ms: missing components: kube-dns
	I1119 22:39:26.198486  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.198523  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:26.198531  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.198541  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.198546  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.198552  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.198556  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.198561  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.198566  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:26.198584  213719 retry.go:31] will retry after 303.182095ms: missing components: kube-dns
	I1119 22:39:26.506554  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.506591  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:26.506598  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.506604  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.506608  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.506613  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.506618  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.506622  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.506627  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:26.506647  213719 retry.go:31] will retry after 472.574028ms: missing components: kube-dns
	I1119 22:39:26.984178  213719 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:26.984212  213719 system_pods.go:89] "coredns-66bc5c9577-4m8f2" [92627362-0048-4b1a-af4e-7f9d8c53a483] Running
	I1119 22:39:26.984220  213719 system_pods.go:89] "etcd-default-k8s-diff-port-570856" [10367870-e3a1-47eb-b3c4-aaa86bcd75fb] Running
	I1119 22:39:26.984226  213719 system_pods.go:89] "kindnet-n8jjs" [f07057ba-2012-4291-ba43-a3638f7c8c58] Running
	I1119 22:39:26.984231  213719 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-570856" [1f655ad0-d00d-452c-84c6-91797dbb8246] Running
	I1119 22:39:26.984235  213719 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-570856" [e70d16a0-455c-4f9d-860d-60b21038f6e6] Running
	I1119 22:39:26.984239  213719 system_pods.go:89] "kube-proxy-n4868" [965b5310-35e9-4026-91b4-733b3eef9088] Running
	I1119 22:39:26.984243  213719 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-570856" [82db77c8-08a3-4917-8b17-c73717e426e2] Running
	I1119 22:39:26.984247  213719 system_pods.go:89] "storage-provisioner" [2339c18e-d677-4777-b9a8-1df877bb86be] Running
	I1119 22:39:26.984255  213719 system_pods.go:126] duration metric: took 1.089240935s to wait for k8s-apps to be running ...
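The retry loop recorded above is plain poll-and-sleep against the pod list. An equivalent manual check, assuming kubectl points at this cluster and the coredns deployment has been rescaled to one replica as logged earlier, would be:

    # poll until the CoreDNS (kube-dns) pod reports phase Running
    until kubectl -n kube-system get pods -l k8s-app=kube-dns \
          -o jsonpath='{.items[*].status.phase}' | grep -q 'Running'; do
      sleep 1
    done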
	I1119 22:39:26.984269  213719 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:39:26.984329  213719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:39:26.998904  213719 system_svc.go:56] duration metric: took 14.6234ms WaitForService to wait for kubelet
	I1119 22:39:26.998932  213719 kubeadm.go:587] duration metric: took 43.05929861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:39:26.998953  213719 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:39:27.002787  213719 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:39:27.003037  213719 node_conditions.go:123] node cpu capacity is 2
	I1119 22:39:27.003065  213719 node_conditions.go:105] duration metric: took 4.106062ms to run NodePressure ...
	I1119 22:39:27.003081  213719 start.go:242] waiting for startup goroutines ...
	I1119 22:39:27.003095  213719 start.go:247] waiting for cluster config update ...
	I1119 22:39:27.003112  213719 start.go:256] writing updated cluster config ...
	I1119 22:39:27.003490  213719 ssh_runner.go:195] Run: rm -f paused
	I1119 22:39:27.008294  213719 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:27.012665  213719 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4m8f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.017786  213719 pod_ready.go:94] pod "coredns-66bc5c9577-4m8f2" is "Ready"
	I1119 22:39:27.017812  213719 pod_ready.go:86] duration metric: took 5.121391ms for pod "coredns-66bc5c9577-4m8f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.020648  213719 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.025936  213719 pod_ready.go:94] pod "etcd-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.026011  213719 pod_ready.go:86] duration metric: took 5.321771ms for pod "etcd-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.028977  213719 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.034047  213719 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.034073  213719 pod_ready.go:86] duration metric: took 5.070216ms for pod "kube-apiserver-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.036706  213719 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.413085  213719 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:27.413111  213719 pod_ready.go:86] duration metric: took 376.376792ms for pod "kube-controller-manager-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:27.613330  213719 pod_ready.go:83] waiting for pod "kube-proxy-n4868" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.012703  213719 pod_ready.go:94] pod "kube-proxy-n4868" is "Ready"
	I1119 22:39:28.012745  213719 pod_ready.go:86] duration metric: took 399.33038ms for pod "kube-proxy-n4868" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.213996  213719 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.613271  213719 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-570856" is "Ready"
	I1119 22:39:28.613305  213719 pod_ready.go:86] duration metric: took 399.283191ms for pod "kube-scheduler-default-k8s-diff-port-570856" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:28.613319  213719 pod_ready.go:40] duration metric: took 1.604992351s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:28.668463  213719 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:39:28.671810  213719 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-570856" cluster and "default" namespace by default
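The "minor skew: 1" note is informational rather than a warning: kubectl's support policy allows one minor version of skew against the API server, so a 1.33 client against a 1.34 control plane is within policy. The skew can be confirmed directly:

    # print client and server versions side by side
    kubectl --context default-k8s-diff-port-570856 version --output=yaml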
	W1119 22:39:24.844923  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:26.845154  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:29.344473  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	W1119 22:39:31.844696  215017 node_ready.go:57] node "embed-certs-227235" has "Ready":"False" status (will retry)
	I1119 22:39:34.345023  215017 node_ready.go:49] node "embed-certs-227235" is "Ready"
	I1119 22:39:34.345048  215017 node_ready.go:38] duration metric: took 40.503896306s for node "embed-certs-227235" to be "Ready" ...
	I1119 22:39:34.345063  215017 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:39:34.345119  215017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:39:34.362404  215017 api_server.go:72] duration metric: took 41.490288995s to wait for apiserver process to appear ...
	I1119 22:39:34.362426  215017 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:39:34.362445  215017 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:39:34.390640  215017 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:39:34.392448  215017 api_server.go:141] control plane version: v1.34.1
	I1119 22:39:34.392508  215017 api_server.go:131] duration metric: took 30.073646ms to wait for apiserver health ...
	I1119 22:39:34.392532  215017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:39:34.400782  215017 system_pods.go:59] 8 kube-system pods found
	I1119 22:39:34.400862  215017 system_pods.go:61] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.400885  215017 system_pods.go:61] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.400909  215017 system_pods.go:61] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.400930  215017 system_pods.go:61] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.400951  215017 system_pods.go:61] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.400973  215017 system_pods.go:61] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.400994  215017 system_pods.go:61] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.401017  215017 system_pods.go:61] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.401041  215017 system_pods.go:74] duration metric: took 8.489033ms to wait for pod list to return data ...
	I1119 22:39:34.401063  215017 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:39:34.404927  215017 default_sa.go:45] found service account: "default"
	I1119 22:39:34.404991  215017 default_sa.go:55] duration metric: took 3.906002ms for default service account to be created ...
	I1119 22:39:34.405016  215017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:39:34.408626  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.408709  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.408731  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.408754  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.408780  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.408803  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.408827  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.408848  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.408881  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.408920  215017 retry.go:31] will retry after 270.078819ms: missing components: kube-dns
	I1119 22:39:34.682801  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.682906  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:39:34.682929  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.682965  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.682988  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.683010  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.683041  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.683064  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.683087  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:39:34.683118  215017 retry.go:31] will retry after 271.259245ms: missing components: kube-dns
	I1119 22:39:34.958505  215017 system_pods.go:86] 8 kube-system pods found
	I1119 22:39:34.958539  215017 system_pods.go:89] "coredns-66bc5c9577-6xhjj" [dae34df3-583b-4539-a4d6-78240466e86c] Running
	I1119 22:39:34.958547  215017 system_pods.go:89] "etcd-embed-certs-227235" [11a732b8-a65d-4a13-8c9f-69b9193419b9] Running
	I1119 22:39:34.958551  215017 system_pods.go:89] "kindnet-v7ws4" [b8f6ea6e-c156-4ce9-9c71-0057f87a1be5] Running
	I1119 22:39:34.958557  215017 system_pods.go:89] "kube-apiserver-embed-certs-227235" [90d0f81c-a22b-4d9a-b5e3-d3b783b345e8] Running
	I1119 22:39:34.958584  215017 system_pods.go:89] "kube-controller-manager-embed-certs-227235" [86f2943e-80a0-4bfc-8764-a48560ccdad9] Running
	I1119 22:39:34.958595  215017 system_pods.go:89] "kube-proxy-plgtr" [6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4] Running
	I1119 22:39:34.958600  215017 system_pods.go:89] "kube-scheduler-embed-certs-227235" [194cd323-d8f6-4a18-9990-931bff9d0b49] Running
	I1119 22:39:34.958603  215017 system_pods.go:89] "storage-provisioner" [dad399ee-80b6-4c16-bed2-296586a544b5] Running
	I1119 22:39:34.958612  215017 system_pods.go:126] duration metric: took 553.576677ms to wait for k8s-apps to be running ...
	I1119 22:39:34.958625  215017 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:39:34.958694  215017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:39:34.972706  215017 system_svc.go:56] duration metric: took 14.071483ms WaitForService to wait for kubelet
	I1119 22:39:34.972778  215017 kubeadm.go:587] duration metric: took 42.100669257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:39:34.972814  215017 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:39:34.975990  215017 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:39:34.976072  215017 node_conditions.go:123] node cpu capacity is 2
	I1119 22:39:34.976093  215017 node_conditions.go:105] duration metric: took 3.255435ms to run NodePressure ...
	I1119 22:39:34.976107  215017 start.go:242] waiting for startup goroutines ...
	I1119 22:39:34.976115  215017 start.go:247] waiting for cluster config update ...
	I1119 22:39:34.976126  215017 start.go:256] writing updated cluster config ...
	I1119 22:39:34.976427  215017 ssh_runner.go:195] Run: rm -f paused
	I1119 22:39:34.980344  215017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:34.985616  215017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6xhjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:34.991603  215017 pod_ready.go:94] pod "coredns-66bc5c9577-6xhjj" is "Ready"
	I1119 22:39:34.991644  215017 pod_ready.go:86] duration metric: took 5.99596ms for pod "coredns-66bc5c9577-6xhjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:34.994018  215017 pod_ready.go:83] waiting for pod "etcd-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.003190  215017 pod_ready.go:94] pod "etcd-embed-certs-227235" is "Ready"
	I1119 22:39:35.003274  215017 pod_ready.go:86] duration metric: took 9.230481ms for pod "etcd-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.007638  215017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.016450  215017 pod_ready.go:94] pod "kube-apiserver-embed-certs-227235" is "Ready"
	I1119 22:39:35.016480  215017 pod_ready.go:86] duration metric: took 8.80742ms for pod "kube-apiserver-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.019656  215017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.385673  215017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-227235" is "Ready"
	I1119 22:39:35.385700  215017 pod_ready.go:86] duration metric: took 365.999627ms for pod "kube-controller-manager-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.584880  215017 pod_ready.go:83] waiting for pod "kube-proxy-plgtr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:35.984356  215017 pod_ready.go:94] pod "kube-proxy-plgtr" is "Ready"
	I1119 22:39:35.984391  215017 pod_ready.go:86] duration metric: took 399.485083ms for pod "kube-proxy-plgtr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.185075  215017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.585576  215017 pod_ready.go:94] pod "kube-scheduler-embed-certs-227235" is "Ready"
	I1119 22:39:36.585603  215017 pod_ready.go:86] duration metric: took 400.501535ms for pod "kube-scheduler-embed-certs-227235" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:39:36.585617  215017 pod_ready.go:40] duration metric: took 1.605197997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:39:36.654842  215017 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:39:36.659599  215017 out.go:179] * Done! kubectl is now configured to use "embed-certs-227235" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ee67ba8ea568c       1611cd07b61d5       8 seconds ago        Running             busybox                   0                   37003496753a7       busybox                                      default
	53dad3142c14c       138784d87c9c5       13 seconds ago       Running             coredns                   0                   da0b810921826       coredns-66bc5c9577-6xhjj                     kube-system
	b65cef45f66bd       ba04bb24b9575       13 seconds ago       Running             storage-provisioner       0                   d4a4a6be4ccbf       storage-provisioner                          kube-system
	d66cb2ea01457       b1a8c6f707935       55 seconds ago       Running             kindnet-cni               0                   9ba83fac00fa8       kindnet-v7ws4                                kube-system
	f093ca4eda738       05baa95f5142d       55 seconds ago       Running             kube-proxy                0                   6634710379274       kube-proxy-plgtr                             kube-system
	355a3fbf79821       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   a298ae6b66aee       kube-scheduler-embed-certs-227235            kube-system
	5756cab0342dc       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   60e7c54e7300a       kube-controller-manager-embed-certs-227235   kube-system
	26aa304b0d835       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   03e03cf9b234c       kube-apiserver-embed-certs-227235            kube-system
	7f78bcd34bd8c       a1894772a478e       About a minute ago   Running             etcd                      0                   a534f5312fa74       etcd-embed-certs-227235                      kube-system
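A listing like the one above can be regenerated on the node itself with crictl, which talks CRI directly to containerd; something like the following should work (profile name taken from this run):

    # -a includes exited containers as well as running ones
    minikube -p embed-certs-227235 ssh -- sudo crictl ps -a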
	
	
	==> containerd <==
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.512610224Z" level=info msg="CreateContainer within sandbox \"d4a4a6be4ccbf73dae0a89a16acad744a0433db5d274ee085e18a38df0caa61a\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"b65cef45f66bd8982ce2de4bf0bc496f53a8596537e811864c0880d902519606\""
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.516624412Z" level=info msg="StartContainer for \"b65cef45f66bd8982ce2de4bf0bc496f53a8596537e811864c0880d902519606\""
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.518022457Z" level=info msg="connecting to shim b65cef45f66bd8982ce2de4bf0bc496f53a8596537e811864c0880d902519606" address="unix:///run/containerd/s/f6335249eef8c42f057e0e307b557f0522e7dcc6fe2b9dc74a42b63339e2a0fd" protocol=ttrpc version=3
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.518692553Z" level=info msg="Container 53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.529829728Z" level=info msg="CreateContainer within sandbox \"da0b8109218267c7348e795db266384102dddf10049f1e1b26dac80079e3fee5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d\""
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.531154214Z" level=info msg="StartContainer for \"53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d\""
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.534560248Z" level=info msg="connecting to shim 53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d" address="unix:///run/containerd/s/76d845c6b272eda3fabf883a03ae9814aedddbcb2879c6108011877e30153dc2" protocol=ttrpc version=3
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.606842371Z" level=info msg="StartContainer for \"b65cef45f66bd8982ce2de4bf0bc496f53a8596537e811864c0880d902519606\" returns successfully"
	Nov 19 22:39:34 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:34.625693483Z" level=info msg="StartContainer for \"53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d\" returns successfully"
	Nov 19 22:39:37 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:37.208049365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3a9ffa6e-50c6-4636-a1c1-d3c478e5e486,Namespace:default,Attempt:0,}"
	Nov 19 22:39:37 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:37.277326065Z" level=info msg="connecting to shim 37003496753a73cc614d0a48480a70362619f7e965a152e0b09fbb96fa4b572e" address="unix:///run/containerd/s/c255262205f9fa82747cd88f1aa052eb98086c264a51b2d1bf145f7bedbb38d9" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:39:37 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:37.354746006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3a9ffa6e-50c6-4636-a1c1-d3c478e5e486,Namespace:default,Attempt:0,} returns sandbox id \"37003496753a73cc614d0a48480a70362619f7e965a152e0b09fbb96fa4b572e\""
	Nov 19 22:39:37 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:37.356927140Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.500199757Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.502289362Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.505528239Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.510659346Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.511360399Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.154212382s"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.511404256Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.529692456Z" level=info msg="CreateContainer within sandbox \"37003496753a73cc614d0a48480a70362619f7e965a152e0b09fbb96fa4b572e\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.551671936Z" level=info msg="Container ee67ba8ea568caaa173a4e1e5d983d36261b41514c58a3e37b3fb43863bda3b6: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.562023692Z" level=info msg="CreateContainer within sandbox \"37003496753a73cc614d0a48480a70362619f7e965a152e0b09fbb96fa4b572e\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ee67ba8ea568caaa173a4e1e5d983d36261b41514c58a3e37b3fb43863bda3b6\""
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.565781927Z" level=info msg="StartContainer for \"ee67ba8ea568caaa173a4e1e5d983d36261b41514c58a3e37b3fb43863bda3b6\""
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.570702069Z" level=info msg="connecting to shim ee67ba8ea568caaa173a4e1e5d983d36261b41514c58a3e37b3fb43863bda3b6" address="unix:///run/containerd/s/c255262205f9fa82747cd88f1aa052eb98086c264a51b2d1bf145f7bedbb38d9" protocol=ttrpc version=3
	Nov 19 22:39:39 embed-certs-227235 containerd[759]: time="2025-11-19T22:39:39.657908228Z" level=info msg="StartContainer for \"ee67ba8ea568caaa173a4e1e5d983d36261b41514c58a3e37b3fb43863bda3b6\" returns successfully"
	
	
	==> coredns [53dad3142c14c1936c21c1a8e5a3059691e20e1d12e01d0dc6871f9c2a992e4d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53782 - 20487 "HINFO IN 6381140115399585633.8959357964783949944. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036564299s
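With CoreDNS serving, the host record injected earlier should now resolve inside the cluster. Busybox ships a minimal nslookup applet, so the test pod from this run can be used to check (expected answer: 192.168.85.1):

    kubectl --context embed-certs-227235 exec busybox -- \
        nslookup host.minikube.internal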
	
	
	==> describe nodes <==
	Name:               embed-certs-227235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-227235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=embed-certs-227235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_38_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:38:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-227235
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:39:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:39:33 +0000   Wed, 19 Nov 2025 22:38:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:39:33 +0000   Wed, 19 Nov 2025 22:38:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:39:33 +0000   Wed, 19 Nov 2025 22:38:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:39:33 +0000   Wed, 19 Nov 2025 22:39:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-227235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                2b37cee5-570a-4071-b36f-9658bf43ea86
	  Boot ID:                    b3875353-65b3-44b7-ad72-afadd7e2486a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-6xhjj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-227235                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         65s
	  kube-system                 kindnet-v7ws4                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-227235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-embed-certs-227235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-proxy-plgtr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-227235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node embed-certs-227235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node embed-certs-227235 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node embed-certs-227235 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-227235 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-227235 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-227235 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-227235 event: Registered Node embed-certs-227235 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-227235 status is now: NodeReady
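The NodeReady event 15s before collection lines up with the ~40s Ready wait recorded in the start log above. A dump in this shape can be regenerated at any time with:

    kubectl --context embed-certs-227235 describe node embed-certs-227235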
	
	
	==> dmesg <==
	[Nov19 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.032038] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[Nov19 21:18] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034282] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.730183] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.763794] kauditd_printk_skb: 36 callbacks suppressed
	[Nov19 21:50] hrtimer: interrupt took 11278311 ns
	
	
	==> etcd [7f78bcd34bd8cf5f518e7de427ae0c653aa056c63742361f17c18ddc9bef7867] <==
	{"level":"warn","ts":"2025-11-19T22:38:40.164302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.208976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.258229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.327481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.348337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.396280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.453698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.477257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.515340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.562853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.588935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.655765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.697479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.730813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.777119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.810975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.844661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.888734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:40.934346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.026263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.078031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.128603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.199541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.247891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:38:41.435313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56882","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:39:48 up  1:21,  0 user,  load average: 3.02, 3.44, 2.85
	Linux embed-certs-227235 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d66cb2ea0145794c66d30b2be0902f9b38f2ebe74716d2a2ad609a759721e4ae] <==
	I1119 22:38:53.687785       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:38:53.688070       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:38:53.688454       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:38:53.688474       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:38:53.688486       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:38:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:38:53.898518       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:38:53.898539       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:38:53.898549       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:38:53.898685       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:39:23.898453       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:39:23.898474       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 22:39:23.898521       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 22:39:23.898562       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 22:39:25.499480       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:39:25.499737       1 metrics.go:72] Registering metrics
	I1119 22:39:25.499922       1 controller.go:711] "Syncing nftables rules"
	I1119 22:39:33.903849       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:39:33.903915       1 main.go:301] handling current node
	I1119 22:39:43.898274       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:39:43.898502       1 main.go:301] handling current node
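The watch failures at 22:39:23 (dial tcp 10.96.0.1:443: i/o timeout) clear on their own two seconds later when the caches sync. The likely explanation is a startup race: 10.96.0.1 is the kubernetes Service VIP, which only becomes reachable once kube-proxy has programmed its rules, so kindnet's first list calls can time out and its reflectors simply retry. That the VIP fronts the expected backend can be checked with:

    # should list the API server advertise address, 192.168.85.2:8443
    kubectl --context embed-certs-227235 get endpoints kubernetes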
	
	
	==> kube-apiserver [26aa304b0d835bec8feab72e1dec5a663069f487b93f0bc31bc6de599a1474d6] <==
	I1119 22:38:43.349571       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:38:43.385441       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:38:43.440406       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:43.457455       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:38:43.490037       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:38:43.509338       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:43.513909       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:38:43.673518       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:38:43.785068       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:38:43.799372       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:38:46.150556       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:38:46.223603       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:38:46.375325       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:38:46.383969       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1119 22:38:46.385652       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:38:46.395318       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:38:46.475630       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:38:47.560168       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:38:47.589700       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:38:47.601641       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:38:52.287517       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:52.298591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:38:52.330874       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:38:52.579628       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 22:39:45.175533       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:43644: use of closed network connection
	
	
	==> kube-controller-manager [5756cab0342dc1679a014cd2d2e99d44d1cffbf30793fae007f64c3e93b0bcbe] <==
	I1119 22:38:51.493209       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 22:38:51.500530       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 22:38:51.500567       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:38:51.500582       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:38:51.500617       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:38:51.504263       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:38:51.504588       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:38:51.504720       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:38:51.518699       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-227235" podCIDRs=["10.244.0.0/24"]
	I1119 22:38:51.523458       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:38:51.523765       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:38:51.523903       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:38:51.524564       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:38:51.529336       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 22:38:51.530270       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:38:51.530800       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:38:51.530981       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 22:38:51.531113       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:38:51.531363       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:38:51.534322       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:38:51.531452       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-227235"
	I1119 22:38:51.535267       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:38:51.536446       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:38:51.536543       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:39:36.542054       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f093ca4eda7387ebeeb9cb96f29d1f576a12fa26db2c80cb49f3ec63e0dd40eb] <==
	I1119 22:38:53.604462       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:38:53.726176       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:38:53.826494       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:38:53.826569       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:38:53.826670       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:38:53.925406       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:38:53.925646       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:38:53.931088       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:38:53.931631       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:38:53.931936       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:38:53.933313       1 config.go:200] "Starting service config controller"
	I1119 22:38:53.933469       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:38:53.933564       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:38:53.933643       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:38:53.933743       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:38:53.933803       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:38:53.934603       1 config.go:309] "Starting node config controller"
	I1119 22:38:53.934739       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:38:53.934810       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:38:54.034218       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:38:54.034259       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:38:54.034302       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [355a3fbf7982116247ee00c0e41d1de1cf83a16ecb21b21e955c2526aadd59eb] <==
	I1119 22:38:45.249418       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:38:45.270974       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:38:45.271160       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:38:45.286301       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:38:45.286428       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1119 22:38:45.301685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:38:45.301971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:38:45.302118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:38:45.302583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:38:45.318021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 22:38:45.318642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:38:45.318693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:38:45.318753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:38:45.318805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:38:45.318881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:38:45.320499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:38:45.320794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:38:45.320962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:38:45.321081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:38:45.321131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:38:45.321182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:38:45.321234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:38:45.321341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:38:45.326637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1119 22:38:46.587605       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:38:48 embed-certs-227235 kubelet[1485]: I1119 22:38:48.504829    1485 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 22:38:48 embed-certs-227235 kubelet[1485]: I1119 22:38:48.608576    1485 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-embed-certs-227235"
	Nov 19 22:38:48 embed-certs-227235 kubelet[1485]: E1119 22:38:48.624090    1485 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-227235\" already exists" pod="kube-system/kube-scheduler-embed-certs-227235"
	Nov 19 22:38:51 embed-certs-227235 kubelet[1485]: I1119 22:38:51.526197    1485 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:38:51 embed-certs-227235 kubelet[1485]: I1119 22:38:51.528099    1485 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.641908    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbxbb\" (UniqueName: \"kubernetes.io/projected/b8f6ea6e-c156-4ce9-9c71-0057f87a1be5-kube-api-access-bbxbb\") pod \"kindnet-v7ws4\" (UID: \"b8f6ea6e-c156-4ce9-9c71-0057f87a1be5\") " pod="kube-system/kindnet-v7ws4"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642431    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4-kube-proxy\") pod \"kube-proxy-plgtr\" (UID: \"6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4\") " pod="kube-system/kube-proxy-plgtr"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642521    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4-xtables-lock\") pod \"kube-proxy-plgtr\" (UID: \"6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4\") " pod="kube-system/kube-proxy-plgtr"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642598    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4-lib-modules\") pod \"kube-proxy-plgtr\" (UID: \"6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4\") " pod="kube-system/kube-proxy-plgtr"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642675    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b8f6ea6e-c156-4ce9-9c71-0057f87a1be5-cni-cfg\") pod \"kindnet-v7ws4\" (UID: \"b8f6ea6e-c156-4ce9-9c71-0057f87a1be5\") " pod="kube-system/kindnet-v7ws4"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642746    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8f6ea6e-c156-4ce9-9c71-0057f87a1be5-xtables-lock\") pod \"kindnet-v7ws4\" (UID: \"b8f6ea6e-c156-4ce9-9c71-0057f87a1be5\") " pod="kube-system/kindnet-v7ws4"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642814    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8f6ea6e-c156-4ce9-9c71-0057f87a1be5-lib-modules\") pod \"kindnet-v7ws4\" (UID: \"b8f6ea6e-c156-4ce9-9c71-0057f87a1be5\") " pod="kube-system/kindnet-v7ws4"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.642884    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4lbc\" (UniqueName: \"kubernetes.io/projected/6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4-kube-api-access-h4lbc\") pod \"kube-proxy-plgtr\" (UID: \"6b8f58ae-b8a4-4f7a-915c-640dc99ce1e4\") " pod="kube-system/kube-proxy-plgtr"
	Nov 19 22:38:52 embed-certs-227235 kubelet[1485]: I1119 22:38:52.775317    1485 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 22:38:53 embed-certs-227235 kubelet[1485]: I1119 22:38:53.664799    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-v7ws4" podStartSLOduration=1.66478017 podStartE2EDuration="1.66478017s" podCreationTimestamp="2025-11-19 22:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:53.640982195 +0000 UTC m=+6.277456973" watchObservedRunningTime="2025-11-19 22:38:53.66478017 +0000 UTC m=+6.301254931"
	Nov 19 22:38:53 embed-certs-227235 kubelet[1485]: I1119 22:38:53.664963    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-plgtr" podStartSLOduration=1.6649564909999999 podStartE2EDuration="1.664956491s" podCreationTimestamp="2025-11-19 22:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:38:53.664524626 +0000 UTC m=+6.300999387" watchObservedRunningTime="2025-11-19 22:38:53.664956491 +0000 UTC m=+6.301431253"
	Nov 19 22:39:33 embed-certs-227235 kubelet[1485]: I1119 22:39:33.948084    1485 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:39:34 embed-certs-227235 kubelet[1485]: I1119 22:39:34.167725    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dad399ee-80b6-4c16-bed2-296586a544b5-tmp\") pod \"storage-provisioner\" (UID: \"dad399ee-80b6-4c16-bed2-296586a544b5\") " pod="kube-system/storage-provisioner"
	Nov 19 22:39:34 embed-certs-227235 kubelet[1485]: I1119 22:39:34.167779    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dae34df3-583b-4539-a4d6-78240466e86c-config-volume\") pod \"coredns-66bc5c9577-6xhjj\" (UID: \"dae34df3-583b-4539-a4d6-78240466e86c\") " pod="kube-system/coredns-66bc5c9577-6xhjj"
	Nov 19 22:39:34 embed-certs-227235 kubelet[1485]: I1119 22:39:34.167805    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4dj9\" (UniqueName: \"kubernetes.io/projected/dad399ee-80b6-4c16-bed2-296586a544b5-kube-api-access-w4dj9\") pod \"storage-provisioner\" (UID: \"dad399ee-80b6-4c16-bed2-296586a544b5\") " pod="kube-system/storage-provisioner"
	Nov 19 22:39:34 embed-certs-227235 kubelet[1485]: I1119 22:39:34.167833    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsm7j\" (UniqueName: \"kubernetes.io/projected/dae34df3-583b-4539-a4d6-78240466e86c-kube-api-access-xsm7j\") pod \"coredns-66bc5c9577-6xhjj\" (UID: \"dae34df3-583b-4539-a4d6-78240466e86c\") " pod="kube-system/coredns-66bc5c9577-6xhjj"
	Nov 19 22:39:34 embed-certs-227235 kubelet[1485]: I1119 22:39:34.757024    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6xhjj" podStartSLOduration=42.756995862 podStartE2EDuration="42.756995862s" podCreationTimestamp="2025-11-19 22:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:39:34.729537809 +0000 UTC m=+47.366012570" watchObservedRunningTime="2025-11-19 22:39:34.756995862 +0000 UTC m=+47.393470631"
	Nov 19 22:39:36 embed-certs-227235 kubelet[1485]: I1119 22:39:36.894216    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.894196522 podStartE2EDuration="42.894196522s" podCreationTimestamp="2025-11-19 22:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:39:34.783041437 +0000 UTC m=+47.419516198" watchObservedRunningTime="2025-11-19 22:39:36.894196522 +0000 UTC m=+49.530671291"
	Nov 19 22:39:36 embed-certs-227235 kubelet[1485]: I1119 22:39:36.991077    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwgw5\" (UniqueName: \"kubernetes.io/projected/3a9ffa6e-50c6-4636-a1c1-d3c478e5e486-kube-api-access-zwgw5\") pod \"busybox\" (UID: \"3a9ffa6e-50c6-4636-a1c1-d3c478e5e486\") " pod="default/busybox"
	Nov 19 22:39:39 embed-certs-227235 kubelet[1485]: I1119 22:39:39.751033    1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.593800571 podStartE2EDuration="3.751009596s" podCreationTimestamp="2025-11-19 22:39:36 +0000 UTC" firstStartedPulling="2025-11-19 22:39:37.35636912 +0000 UTC m=+49.992843881" lastFinishedPulling="2025-11-19 22:39:39.513578145 +0000 UTC m=+52.150052906" observedRunningTime="2025-11-19 22:39:39.750573473 +0000 UTC m=+52.387048234" watchObservedRunningTime="2025-11-19 22:39:39.751009596 +0000 UTC m=+52.387484357"
	
	
	==> storage-provisioner [b65cef45f66bd8982ce2de4bf0bc496f53a8596537e811864c0880d902519606] <==
	I1119 22:39:34.601144       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:39:34.637602       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:39:34.637660       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:39:34.641387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:34.651283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:39:34.651644       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:39:34.654302       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-227235_0da3fa68-c347-45ef-be87-ad82e1b302e4!
	I1119 22:39:34.654375       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"01511746-7309-4f99-ba53-8a779e31347e", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-227235_0da3fa68-c347-45ef-be87-ad82e1b302e4 became leader
	W1119 22:39:34.661438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:34.678515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:39:34.768257       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-227235_0da3fa68-c347-45ef-be87-ad82e1b302e4!
	W1119 22:39:36.691502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:36.699724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:38.703865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:38.708605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:40.711912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:40.717376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:42.720582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:42.726214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:44.732317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:44.741663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:46.745041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:46.750687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:48.753980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:39:48.762454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-227235 -n embed-certs-227235
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-227235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (12.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (14.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-546032 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [17fd3c67-5d08-43d8-88c9-cad71f87f288] Pending
E1119 22:42:24.396902    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [17fd3c67-5d08-43d8-88c9-cad71f87f288] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [17fd3c67-5d08-43d8-88c9-cad71f87f288] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.006141253s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-546032 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
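For context: the assertion above comes from the test exec'ing "ulimit -n" inside the busybox pod and comparing the soft open-file limit against the 1048576 that the kic container is expected to carry; here the pod only reported the default 1024. Below is a minimal Go sketch of reproducing the probe by hand, assuming the no-preload-546032 context and the busybox pod from testdata/busybox.yaml are still present (checkNofile is a made-up helper name, not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)

	// checkNofile runs `ulimit -n` inside the busybox pod of the given
	// kubectl context and compares the reported soft limit to want.
	func checkNofile(context string, want int) error {
		out, err := exec.Command("kubectl", "--context", context,
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			return err
		}
		got, err := strconv.Atoi(strings.TrimSpace(string(out)))
		if err != nil {
			return err
		}
		if got != want {
			return fmt.Errorf("'ulimit -n' returned %d, expected %d", got, want)
		}
		return nil
	}

	func main() {
		// Profile name taken from the failing test above.
		if err := checkNofile("no-preload-546032", 1048576); err != nil {
			fmt.Println(err)
		}
	}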
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-546032
helpers_test.go:243: (dbg) docker inspect no-preload-546032:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b",
	        "Created": "2025-11-19T22:41:08.746470937Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230046,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:41:08.823161158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b/hosts",
	        "LogPath": "/var/lib/docker/containers/5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b/5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b-json.log",
	        "Name": "/no-preload-546032",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-546032:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-546032",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b",
	                "LowerDir": "/var/lib/docker/overlay2/1d841c8585429c8f1d31118c3dc242e39657934f28ccc1982fcb0b12edf19a8d-init/diff:/var/lib/docker/overlay2/b6ebc9601ea0ae08484f263713f3358dd93f7748ebfafbd9155229908dee9606/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d841c8585429c8f1d31118c3dc242e39657934f28ccc1982fcb0b12edf19a8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d841c8585429c8f1d31118c3dc242e39657934f28ccc1982fcb0b12edf19a8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d841c8585429c8f1d31118c3dc242e39657934f28ccc1982fcb0b12edf19a8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-546032",
	                "Source": "/var/lib/docker/volumes/no-preload-546032/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-546032",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-546032",
	                "name.minikube.sigs.k8s.io": "no-preload-546032",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b75d9f517bb1c633ebea44ace499ed9447f9ab1045e962fe14ee8c8296fa724",
	            "SandboxKey": "/var/run/docker/netns/9b75d9f517bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-546032": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:b6:18:d7:fe:29",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "195e3ba18813077b9cf1f8edba491f377c674c0a95616e6e54cf04871c173ac3",
	                    "EndpointID": "e438a178494a65df8afebe830a0c577f453ae1326761f44911cd433af25d3297",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-546032",
	                        "5b0cbe10f040"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
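Worth noting in the inspect output above: "HostConfig" reports "Ulimits": [], i.e. the kic container was created without an explicit nofile override, so processes inside it fall back to whatever limit the Docker daemon applies by default, which is consistent with the 1024 the test observed rather than the expected 1048576. A quick way to confirm just that field, sketched in Go around the real `docker inspect --format` flag (the container name is taken from the log above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Print only HostConfig.Ulimits for the kic container; an empty
		// list ([] or null) means no per-container nofile override was set.
		out, err := exec.Command("docker", "inspect",
			"--format", "{{json .HostConfig.Ulimits}}", "no-preload-546032").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("Ulimits: %s", out)
	}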
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-546032 -n no-preload-546032
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-546032 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-546032 logs -n 25: (1.552541424s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-570856 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:39 UTC │ 19 Nov 25 22:39 UTC │
	│ start   │ -p default-k8s-diff-port-570856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:39 UTC │ 19 Nov 25 22:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-227235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:40 UTC │ 19 Nov 25 22:40 UTC │
	│ start   │ -p embed-certs-227235 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:40 UTC │ 19 Nov 25 22:41 UTC │
	│ image   │ default-k8s-diff-port-570856 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ pause   │ -p default-k8s-diff-port-570856 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ unpause │ -p default-k8s-diff-port-570856 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ delete  │ -p default-k8s-diff-port-570856                                                                                                                                                                                                                     │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ delete  │ -p default-k8s-diff-port-570856                                                                                                                                                                                                                     │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ delete  │ -p disable-driver-mounts-063316                                                                                                                                                                                                                     │ disable-driver-mounts-063316 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ start   │ -p no-preload-546032 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-546032            │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:42 UTC │
	│ image   │ embed-certs-227235 image list --format=json                                                                                                                                                                                                         │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ pause   │ -p embed-certs-227235 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ unpause │ -p embed-certs-227235 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ delete  │ -p embed-certs-227235                                                                                                                                                                                                                               │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ delete  │ -p embed-certs-227235                                                                                                                                                                                                                               │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ start   │ -p newest-cni-616827 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:42 UTC │
	│ addons  │ enable metrics-server -p newest-cni-616827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ stop    │ -p newest-cni-616827 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-616827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ start   │ -p newest-cni-616827 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ image   │ newest-cni-616827 image list --format=json                                                                                                                                                                                                          │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ pause   │ -p newest-cni-616827 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ unpause │ -p newest-cni-616827 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ delete  │ -p newest-cni-616827                                                                                                                                                                                                                                │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:42:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:42:11.359160  237085 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:42:11.359368  237085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:42:11.359404  237085 out.go:374] Setting ErrFile to fd 2...
	I1119 22:42:11.359425  237085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:42:11.359883  237085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:42:11.360782  237085 out.go:368] Setting JSON to false
	I1119 22:42:11.361749  237085 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5052,"bootTime":1763587079,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 22:42:11.361862  237085 start.go:143] virtualization:  
	I1119 22:42:11.365250  237085 out.go:179] * [newest-cni-616827] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:42:11.369246  237085 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:42:11.369347  237085 notify.go:221] Checking for updates...
	I1119 22:42:11.375257  237085 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:42:11.378230  237085 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:42:11.381044  237085 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 22:42:11.383847  237085 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:42:11.386803  237085 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:42:11.390382  237085 config.go:182] Loaded profile config "newest-cni-616827": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:42:11.390935  237085 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:42:11.419380  237085 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:42:11.419497  237085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:42:11.485015  237085 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:42:11.475232396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:42:11.485121  237085 docker.go:319] overlay module found
	I1119 22:42:11.488223  237085 out.go:179] * Using the docker driver based on existing profile
	I1119 22:42:11.491149  237085 start.go:309] selected driver: docker
	I1119 22:42:11.491173  237085 start.go:930] validating driver "docker" against &{Name:newest-cni-616827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:42:11.491293  237085 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:42:11.492094  237085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:42:11.554101  237085 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:42:11.544122556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:42:11.554507  237085 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 22:42:11.554544  237085 cni.go:84] Creating CNI manager for ""
	I1119 22:42:11.554605  237085 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:42:11.554656  237085 start.go:353] cluster config:
	{Name:newest-cni-616827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:42:11.557729  237085 out.go:179] * Starting "newest-cni-616827" primary control-plane node in "newest-cni-616827" cluster
	I1119 22:42:11.560600  237085 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:42:11.563528  237085 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:42:11.566420  237085 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:42:11.566466  237085 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1119 22:42:11.566480  237085 cache.go:65] Caching tarball of preloaded images
	I1119 22:42:11.566514  237085 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:42:11.566575  237085 preload.go:238] Found /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1119 22:42:11.566585  237085 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 22:42:11.566698  237085 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/newest-cni-616827/config.json ...
	I1119 22:42:11.590614  237085 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:42:11.590641  237085 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:42:11.590660  237085 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:42:11.590682  237085 start.go:360] acquireMachinesLock for newest-cni-616827: {Name:mkba6f544d6e73bda135e50bf4548f4edb524089 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:42:11.590766  237085 start.go:364] duration metric: took 61.867µs to acquireMachinesLock for "newest-cni-616827"
	I1119 22:42:11.590790  237085 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:42:11.590796  237085 fix.go:54] fixHost starting: 
	I1119 22:42:11.591072  237085 cli_runner.go:164] Run: docker container inspect newest-cni-616827 --format={{.State.Status}}
	I1119 22:42:11.610017  237085 fix.go:112] recreateIfNeeded on newest-cni-616827: state=Stopped err=<nil>
	W1119 22:42:11.610047  237085 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:42:07.833297  229740 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-546032" context rescaled to 1 replicas
	W1119 22:42:09.329062  229740 node_ready.go:57] node "no-preload-546032" has "Ready":"False" status (will retry)
	W1119 22:42:11.827998  229740 node_ready.go:57] node "no-preload-546032" has "Ready":"False" status (will retry)
	I1119 22:42:11.613232  237085 out.go:252] * Restarting existing docker container for "newest-cni-616827" ...
	I1119 22:42:11.613337  237085 cli_runner.go:164] Run: docker start newest-cni-616827
	I1119 22:42:11.892476  237085 cli_runner.go:164] Run: docker container inspect newest-cni-616827 --format={{.State.Status}}
	I1119 22:42:11.924870  237085 kic.go:430] container "newest-cni-616827" state is running.
	I1119 22:42:11.925264  237085 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-616827
	I1119 22:42:11.950196  237085 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/newest-cni-616827/config.json ...
	I1119 22:42:11.950429  237085 machine.go:94] provisionDockerMachine start ...
	I1119 22:42:11.950487  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:11.975522  237085 main.go:143] libmachine: Using SSH client type: native
	I1119 22:42:11.976006  237085 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1119 22:42:11.976026  237085 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:42:11.976750  237085 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 22:42:15.150187  237085 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-616827
	
	I1119 22:42:15.150215  237085 ubuntu.go:182] provisioning hostname "newest-cni-616827"
	I1119 22:42:15.150291  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:15.170465  237085 main.go:143] libmachine: Using SSH client type: native
	I1119 22:42:15.170791  237085 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1119 22:42:15.170808  237085 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-616827 && echo "newest-cni-616827" | sudo tee /etc/hostname
	I1119 22:42:15.329352  237085 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-616827
	
	I1119 22:42:15.329438  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:15.353244  237085 main.go:143] libmachine: Using SSH client type: native
	I1119 22:42:15.353561  237085 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1119 22:42:15.353586  237085 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-616827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-616827/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-616827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:42:15.498684  237085 main.go:143] libmachine: SSH cmd err, output: <nil>: 
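The guarded script above is idempotent: it rewrites an existing 127.0.1.1 entry in place and only appends one when none exists, so repeated provisioning runs cannot stack duplicate entries. On a stock image the resulting /etc/hosts line is simply:

	127.0.1.1 newest-cni-616827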
	I1119 22:42:15.498706  237085 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-2347/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-2347/.minikube}
	I1119 22:42:15.498758  237085 ubuntu.go:190] setting up certificates
	I1119 22:42:15.498768  237085 provision.go:84] configureAuth start
	I1119 22:42:15.498846  237085 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-616827
	I1119 22:42:15.516591  237085 provision.go:143] copyHostCerts
	I1119 22:42:15.516661  237085 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem, removing ...
	I1119 22:42:15.516682  237085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem
	I1119 22:42:15.516761  237085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/ca.pem (1082 bytes)
	I1119 22:42:15.516864  237085 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem, removing ...
	I1119 22:42:15.516874  237085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem
	I1119 22:42:15.516902  237085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/cert.pem (1123 bytes)
	I1119 22:42:15.516957  237085 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem, removing ...
	I1119 22:42:15.516966  237085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem
	I1119 22:42:15.516998  237085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-2347/.minikube/key.pem (1675 bytes)
	I1119 22:42:15.517048  237085 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem org=jenkins.newest-cni-616827 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-616827]
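minikube generates this server certificate in its own Go provisioning code, but the operation is a standard CA-signed certificate carrying the SANs listed in the log line above. A rough openssl equivalent, offered as a sketch only (file names taken from the log; bash process substitution assumed):

	# sketch: issue a CSR, then sign it with the minikube CA, embedding the SANs shown above
	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.newest-cni-616827" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:newest-cni-616827')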
	I1119 22:42:16.195760  237085 provision.go:177] copyRemoteCerts
	I1119 22:42:16.195826  237085 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:42:16.195866  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:16.213716  237085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/newest-cni-616827/id_rsa Username:docker}
	I1119 22:42:16.314009  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:42:16.336415  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	W1119 22:42:13.828650  229740 node_ready.go:57] node "no-preload-546032" has "Ready":"False" status (will retry)
	W1119 22:42:16.328092  229740 node_ready.go:57] node "no-preload-546032" has "Ready":"False" status (will retry)
	I1119 22:42:16.359409  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:42:16.377104  237085 provision.go:87] duration metric: took 878.321493ms to configureAuth
	I1119 22:42:16.377129  237085 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:42:16.377338  237085 config.go:182] Loaded profile config "newest-cni-616827": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:42:16.377355  237085 machine.go:97] duration metric: took 4.426917303s to provisionDockerMachine
	I1119 22:42:16.377364  237085 start.go:293] postStartSetup for "newest-cni-616827" (driver="docker")
	I1119 22:42:16.377373  237085 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:42:16.377426  237085 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:42:16.377486  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:16.395424  237085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/newest-cni-616827/id_rsa Username:docker}
	I1119 22:42:16.502473  237085 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:42:16.506069  237085 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:42:16.506101  237085 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:42:16.506113  237085 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/addons for local assets ...
	I1119 22:42:16.506197  237085 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-2347/.minikube/files for local assets ...
	I1119 22:42:16.506280  237085 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem -> 41442.pem in /etc/ssl/certs
	I1119 22:42:16.506387  237085 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:42:16.514800  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:42:16.534730  237085 start.go:296] duration metric: took 157.35117ms for postStartSetup
	I1119 22:42:16.534808  237085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:42:16.534847  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:16.552787  237085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/newest-cni-616827/id_rsa Username:docker}
	I1119 22:42:16.651858  237085 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:42:16.657357  237085 fix.go:56] duration metric: took 5.066554747s for fixHost
	I1119 22:42:16.657386  237085 start.go:83] releasing machines lock for "newest-cni-616827", held for 5.066607777s
	I1119 22:42:16.657457  237085 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-616827
	I1119 22:42:16.674741  237085 ssh_runner.go:195] Run: cat /version.json
	I1119 22:42:16.674791  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:16.674790  237085 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:42:16.674867  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:16.697056  237085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/newest-cni-616827/id_rsa Username:docker}
	I1119 22:42:16.704925  237085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/newest-cni-616827/id_rsa Username:docker}
	I1119 22:42:16.914103  237085 ssh_runner.go:195] Run: systemctl --version
	I1119 22:42:16.920689  237085 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:42:16.925024  237085 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:42:16.925100  237085 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:42:16.935006  237085 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:42:16.935029  237085 start.go:496] detecting cgroup driver to use...
	I1119 22:42:16.935061  237085 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:42:16.935132  237085 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 22:42:16.953580  237085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 22:42:16.967450  237085 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:42:16.967538  237085 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:42:16.983336  237085 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:42:16.996741  237085 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:42:17.115841  237085 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:42:17.231944  237085 docker.go:234] disabling docker service ...
	I1119 22:42:17.232014  237085 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:42:17.248403  237085 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:42:17.262841  237085 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:42:17.380019  237085 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:42:17.495720  237085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:42:17.509986  237085 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:42:17.526659  237085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 22:42:17.537290  237085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 22:42:17.547365  237085 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1119 22:42:17.547438  237085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1119 22:42:17.558517  237085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:42:17.568124  237085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 22:42:17.577749  237085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 22:42:17.587028  237085 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:42:17.596028  237085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 22:42:17.605005  237085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 22:42:17.614372  237085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
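Taken together, the sed edits above converge /etc/containerd/config.toml on: the registry.k8s.io/pause:3.10.1 sandbox image, restrict_oom_score_adj = false, the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, CNI configs from /etc/cni/net.d, and unprivileged ports enabled. An illustrative fragment of the result, assuming the containerd 1.x-style CRI table names these sed patterns target (exact table paths differ across containerd config versions):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false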
	I1119 22:42:17.623544  237085 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:42:17.631095  237085 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:42:17.638515  237085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:42:17.755523  237085 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 22:42:17.914713  237085 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 22:42:17.914842  237085 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 22:42:17.919642  237085 start.go:564] Will wait 60s for crictl version
	I1119 22:42:17.919750  237085 ssh_runner.go:195] Run: which crictl
	I1119 22:42:17.923401  237085 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:42:17.958275  237085 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 22:42:17.958354  237085 ssh_runner.go:195] Run: containerd --version
	I1119 22:42:17.979537  237085 ssh_runner.go:195] Run: containerd --version
	I1119 22:42:18.007424  237085 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 22:42:18.010537  237085 cli_runner.go:164] Run: docker network inspect newest-cni-616827 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:42:18.028278  237085 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:42:18.032726  237085 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
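This { grep -v ...; echo ...; } > /tmp/h.$$ pattern is the same idempotent rewrite used for the hostname earlier: drop any stale host.minikube.internal entry, append the current gateway address, and install the result with sudo cp (a plain > /etc/hosts redirect would not run with root privileges). The line it guarantees is:

	192.168.85.1	host.minikube.internal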
	I1119 22:42:18.049988  237085 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 22:42:18.052917  237085 kubeadm.go:884] updating cluster {Name:newest-cni-616827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:42:18.053098  237085 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:42:18.053180  237085 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:42:18.079495  237085 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:42:18.079524  237085 containerd.go:534] Images already preloaded, skipping extraction
	I1119 22:42:18.079583  237085 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:42:18.107516  237085 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 22:42:18.107538  237085 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:42:18.107545  237085 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 22:42:18.107646  237085 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-616827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
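The empty ExecStart= in the drop-in above is deliberate systemd idiom: it clears the ExecStart inherited from the base kubelet.service before substituting minikube's own command line, since ExecStart for a non-oneshot service may only be assigned once. To inspect the merged unit on the node (a sketch):

	systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in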
	I1119 22:42:18.107714  237085 ssh_runner.go:195] Run: sudo crictl info
	I1119 22:42:18.135662  237085 cni.go:84] Creating CNI manager for ""
	I1119 22:42:18.135688  237085 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:42:18.135709  237085 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 22:42:18.135732  237085 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-616827 NodeName:newest-cni-616827 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:42:18.135854  237085 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-616827"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
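
The rendered manifest is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; recent kubeadm releases can lint such a file offline, a handy check when modifying these templates (a sketch, assuming `kubeadm config validate` is available in the v1.34.1 binary):

	/var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new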
	
	I1119 22:42:18.135939  237085 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:42:18.144030  237085 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:42:18.144168  237085 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:42:18.151721  237085 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1119 22:42:18.165547  237085 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:42:18.179309  237085 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1119 22:42:18.192038  237085 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:42:18.195675  237085 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:42:18.205271  237085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:42:18.317483  237085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:42:18.336814  237085 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/newest-cni-616827 for IP: 192.168.85.2
	I1119 22:42:18.336836  237085 certs.go:195] generating shared ca certs ...
	I1119 22:42:18.336878  237085 certs.go:227] acquiring lock for ca certs: {Name:mk76285c445bf14c1e73dedba3201c9181209ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:42:18.337064  237085 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key
	I1119 22:42:18.337136  237085 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key
	I1119 22:42:18.337151  237085 certs.go:257] generating profile certs ...
	I1119 22:42:18.337274  237085 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/newest-cni-616827/client.key
	I1119 22:42:18.337397  237085 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/newest-cni-616827/apiserver.key.bc71c537
	I1119 22:42:18.337474  237085 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/newest-cni-616827/proxy-client.key
	I1119 22:42:18.337618  237085 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem (1338 bytes)
	W1119 22:42:18.337668  237085 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144_empty.pem, impossibly tiny 0 bytes
	I1119 22:42:18.337685  237085 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:42:18.337711  237085 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:42:18.337763  237085 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:42:18.337798  237085 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/certs/key.pem (1675 bytes)
	I1119 22:42:18.337875  237085 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem (1708 bytes)
	I1119 22:42:18.338609  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:42:18.375173  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1119 22:42:18.401819  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:42:18.424737  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:42:18.447872  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/newest-cni-616827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:42:18.470473  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/newest-cni-616827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:42:18.497715  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/newest-cni-616827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:42:18.529466  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/newest-cni-616827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:42:18.550054  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/ssl/certs/41442.pem --> /usr/share/ca-certificates/41442.pem (1708 bytes)
	I1119 22:42:18.569999  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:42:18.592562  237085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-2347/.minikube/certs/4144.pem --> /usr/share/ca-certificates/4144.pem (1338 bytes)
	I1119 22:42:18.613113  237085 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:42:18.626880  237085 ssh_runner.go:195] Run: openssl version
	I1119 22:42:18.635501  237085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41442.pem && ln -fs /usr/share/ca-certificates/41442.pem /etc/ssl/certs/41442.pem"
	I1119 22:42:18.645140  237085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41442.pem
	I1119 22:42:18.652074  237085 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/41442.pem
	I1119 22:42:18.652184  237085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41442.pem
	I1119 22:42:18.695762  237085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41442.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:42:18.704489  237085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:42:18.713596  237085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:42:18.718047  237085 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:42:18.718122  237085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:42:18.759572  237085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:42:18.767895  237085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4144.pem && ln -fs /usr/share/ca-certificates/4144.pem /etc/ssl/certs/4144.pem"
	I1119 22:42:18.776418  237085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4144.pem
	I1119 22:42:18.782282  237085 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/4144.pem
	I1119 22:42:18.782385  237085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem
	I1119 22:42:18.823761  237085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4144.pem /etc/ssl/certs/51391683.0"
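The /etc/ssl/certs/<hash>.0 names above come from OpenSSL's subject-hash lookup scheme: when verifying a chain, openssl hashes the issuer subject and probes <hash>.N symlinks in the certificate directory. The 51391683.0 link for 4144.pem in this run can be reproduced as:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/4144.pem)   # prints 51391683 here
	sudo ln -fs /etc/ssl/certs/4144.pem "/etc/ssl/certs/${hash}.0"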
	I1119 22:42:18.833672  237085 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:42:18.837735  237085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:42:18.879699  237085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:42:18.925844  237085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:42:18.973178  237085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:42:19.034104  237085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:42:19.087965  237085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
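Each -checkend 86400 invocation exits non-zero if the certificate expires within 86400 seconds (24 hours); minikube uses that exit status to decide whether control-plane certificates need regeneration before reuse. The same sweep as a compact loop (a sketch over the files checked above):

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	    || echo "$c.crt expires within 24h"
	done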
	I1119 22:42:19.158050  237085 kubeadm.go:401] StartCluster: {Name:newest-cni-616827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:42:19.158210  237085 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 22:42:19.158310  237085 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:42:19.232866  237085 cri.go:89] found id: "e8ccc2abc90edb18ef4beecddd83c753c5608a2d843171c898360dc8e17f69dd"
	I1119 22:42:19.232940  237085 cri.go:89] found id: "ea55dfe6a1e894170daf0d5756847efe439a6c4d909443c1a6d41b3a5478e93d"
	I1119 22:42:19.232959  237085 cri.go:89] found id: "8a4ff20ec3833cc81b8a587cb8b89602ac7c89c1c041ac50e9abbeb85fb81e99"
	I1119 22:42:19.232987  237085 cri.go:89] found id: "97854ebbda398e90b79601a933fe570bae5fa9c4fbf1c04eca3a6848247d2124"
	I1119 22:42:19.233009  237085 cri.go:89] found id: "c2169f6171390f54f6cd2c0dfd259d56521633efa70e7552144966d63f7869e9"
	I1119 22:42:19.233030  237085 cri.go:89] found id: "04a4711677eb2ae27b84ea0ba60279bd9c17db94e81707f5ea53a0f0837aedc4"
	I1119 22:42:19.233049  237085 cri.go:89] found id: ""
	I1119 22:42:19.233127  237085 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1119 22:42:19.274682  237085 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"14ee06ef5d6dc334fa628ecfcd32d2341b4d18fefcdadbad96152151be52dac9","pid":909,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14ee06ef5d6dc334fa628ecfcd32d2341b4d18fefcdadbad96152151be52dac9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14ee06ef5d6dc334fa628ecfcd32d2341b4d18fefcdadbad96152151be52dac9/rootfs","created":"2025-11-19T22:42:19.120219781Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"14ee06ef5d6dc334fa628ecfcd32d2341b4d18fefcdadbad96152151be52dac9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-616827_3ebd1608776e49ceda7a6d568920d675","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-616827","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3ebd1608776e49ceda7a6d568920d675"},"owner":"root"},{"ociVersion":"1.2.1","id":"1db2994735e862d904f51a41dae6a8dbaba794dcb84fe7172b982d9faf045998","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1db2994735e862d904f51a41dae6a8dbaba794dcb84fe7172b982d9faf045998","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1db2994735e862d904f51a41dae6a8dbaba794dcb84fe7172b982d9faf045998/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"1db2994735e862d904f51a41dae6a8dbaba794dcb84fe7172b982d9faf045998","io.kubernetes.cri.sandbox-l
og-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-616827_532ad941c75f20b2ffba9da3f52f4ef6","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-616827","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"532ad941c75f20b2ffba9da3f52f4ef6"},"owner":"root"},{"ociVersion":"1.2.1","id":"507f883c5e58aedff536a5abcc52db738ba05b903ec689d517223b7342a693d4","pid":928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/507f883c5e58aedff536a5abcc52db738ba05b903ec689d517223b7342a693d4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/507f883c5e58aedff536a5abcc52db738ba05b903ec689d517223b7342a693d4/rootfs","created":"2025-11-19T22:42:19.177334061Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.
sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"507f883c5e58aedff536a5abcc52db738ba05b903ec689d517223b7342a693d4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-616827_5bf39bbcb15a75abe813f5f2a7c6fe01","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-616827","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5bf39bbcb15a75abe813f5f2a7c6fe01"},"owner":"root"},{"ociVersion":"1.2.1","id":"649b4c051a8d872991d11daa48bf9e50e6addf67cedb313eaccc7b6166cf863b","pid":920,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/649b4c051a8d872991d11daa48bf9e50e6addf67cedb313eaccc7b6166cf863b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/649b4c051a8d872991d11daa48bf9e50e6addf67cedb313eaccc7b6166cf863b/rootfs","created":"2025-11-19T22:42:19.168236617Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernet
es.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"649b4c051a8d872991d11daa48bf9e50e6addf67cedb313eaccc7b6166cf863b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-616827_6cc376b4dab5492cf33aa3b7d2e5d14a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-616827","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6cc376b4dab5492cf33aa3b7d2e5d14a"},"owner":"root"}]
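The raw runc listing above is what cri.go condenses into the {ID Status} pairs that follow; the same reduction can be done by hand with jq, assuming it is available alongside the host tooling (a sketch):

	sudo runc --root /run/containerd/runc/k8s.io list -f json | jq -r '.[] | "\(.id)  \(.status)"'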
	I1119 22:42:19.274875  237085 cri.go:126] list returned 4 containers
	I1119 22:42:19.274908  237085 cri.go:129] container: {ID:14ee06ef5d6dc334fa628ecfcd32d2341b4d18fefcdadbad96152151be52dac9 Status:running}
	I1119 22:42:19.274940  237085 cri.go:131] skipping 14ee06ef5d6dc334fa628ecfcd32d2341b4d18fefcdadbad96152151be52dac9 - not in ps
	I1119 22:42:19.274962  237085 cri.go:129] container: {ID:1db2994735e862d904f51a41dae6a8dbaba794dcb84fe7172b982d9faf045998 Status:stopped}
	I1119 22:42:19.274993  237085 cri.go:131] skipping 1db2994735e862d904f51a41dae6a8dbaba794dcb84fe7172b982d9faf045998 - not in ps
	I1119 22:42:19.275027  237085 cri.go:129] container: {ID:507f883c5e58aedff536a5abcc52db738ba05b903ec689d517223b7342a693d4 Status:running}
	I1119 22:42:19.275049  237085 cri.go:131] skipping 507f883c5e58aedff536a5abcc52db738ba05b903ec689d517223b7342a693d4 - not in ps
	I1119 22:42:19.275074  237085 cri.go:129] container: {ID:649b4c051a8d872991d11daa48bf9e50e6addf67cedb313eaccc7b6166cf863b Status:running}
	I1119 22:42:19.275098  237085 cri.go:131] skipping 649b4c051a8d872991d11daa48bf9e50e6addf67cedb313eaccc7b6166cf863b - not in ps
	I1119 22:42:19.275175  237085 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:42:19.286696  237085 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:42:19.286756  237085 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:42:19.286821  237085 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:42:19.298056  237085 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:42:19.298757  237085 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-616827" does not appear in /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:42:19.299081  237085 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-2347/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-616827" cluster setting kubeconfig missing "newest-cni-616827" context setting]
	I1119 22:42:19.299640  237085 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:42:19.301405  237085 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:42:19.309650  237085 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 22:42:19.309679  237085 kubeadm.go:602] duration metric: took 22.904235ms to restartPrimaryControlPlane
	I1119 22:42:19.309703  237085 kubeadm.go:403] duration metric: took 151.666862ms to StartCluster
	I1119 22:42:19.309718  237085 settings.go:142] acquiring lock: {Name:mk5c8f7d46662d574c7e53cf7b09709855a1e14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:42:19.309788  237085 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:42:19.310739  237085 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/kubeconfig: {Name:mk670f88d9cb1be22f05f7db4ddcfb97af791e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:42:19.310963  237085 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:42:19.311418  237085 config.go:182] Loaded profile config "newest-cni-616827": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:42:19.311415  237085 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:42:19.311557  237085 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-616827"
	I1119 22:42:19.311589  237085 addons.go:70] Setting metrics-server=true in profile "newest-cni-616827"
	I1119 22:42:19.311595  237085 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-616827"
	W1119 22:42:19.311646  237085 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:42:19.311569  237085 addons.go:70] Setting dashboard=true in profile "newest-cni-616827"
	I1119 22:42:19.311715  237085 addons.go:239] Setting addon dashboard=true in "newest-cni-616827"
	W1119 22:42:19.311732  237085 addons.go:248] addon dashboard should already be in state true
	I1119 22:42:19.311808  237085 host.go:66] Checking if "newest-cni-616827" exists ...
	I1119 22:42:19.311733  237085 host.go:66] Checking if "newest-cni-616827" exists ...
	I1119 22:42:19.312362  237085 cli_runner.go:164] Run: docker container inspect newest-cni-616827 --format={{.State.Status}}
	I1119 22:42:19.312374  237085 cli_runner.go:164] Run: docker container inspect newest-cni-616827 --format={{.State.Status}}
	I1119 22:42:19.311620  237085 addons.go:239] Setting addon metrics-server=true in "newest-cni-616827"
	W1119 22:42:19.313138  237085 addons.go:248] addon metrics-server should already be in state true
	I1119 22:42:19.313174  237085 host.go:66] Checking if "newest-cni-616827" exists ...
	I1119 22:42:19.313614  237085 cli_runner.go:164] Run: docker container inspect newest-cni-616827 --format={{.State.Status}}
	I1119 22:42:19.311560  237085 addons.go:70] Setting default-storageclass=true in profile "newest-cni-616827"
	I1119 22:42:19.315618  237085 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-616827"
	I1119 22:42:19.317659  237085 cli_runner.go:164] Run: docker container inspect newest-cni-616827 --format={{.State.Status}}
	I1119 22:42:19.319915  237085 out.go:179] * Verifying Kubernetes components...
	I1119 22:42:19.325995  237085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:42:19.357292  237085 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:42:19.360336  237085 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:42:19.360361  237085 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:42:19.360434  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:19.389221  237085 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 22:42:19.392888  237085 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 22:42:19.396006  237085 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 22:42:19.396032  237085 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 22:42:19.396106  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:19.401294  237085 addons.go:239] Setting addon default-storageclass=true in "newest-cni-616827"
	W1119 22:42:19.401324  237085 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:42:19.401349  237085 host.go:66] Checking if "newest-cni-616827" exists ...
	I1119 22:42:19.401957  237085 cli_runner.go:164] Run: docker container inspect newest-cni-616827 --format={{.State.Status}}
	I1119 22:42:19.406314  237085 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1119 22:42:19.409332  237085 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 22:42:19.409357  237085 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 22:42:19.409431  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:19.448205  237085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/newest-cni-616827/id_rsa Username:docker}
	I1119 22:42:19.474350  237085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/newest-cni-616827/id_rsa Username:docker}
	I1119 22:42:19.480080  237085 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:42:19.480105  237085 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:42:19.480165  237085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-616827
	I1119 22:42:19.497388  237085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/newest-cni-616827/id_rsa Username:docker}
	I1119 22:42:19.526614  237085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/newest-cni-616827/id_rsa Username:docker}
	I1119 22:42:19.716701  237085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:42:19.785943  237085 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:42:19.786083  237085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:42:19.863783  237085 api_server.go:72] duration metric: took 552.78776ms to wait for apiserver process to appear ...
	I1119 22:42:19.863862  237085 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:42:19.863906  237085 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:42:19.979470  237085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:42:19.998116  237085 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 22:42:19.998282  237085 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 22:42:20.049956  237085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:42:20.068035  237085 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 22:42:20.068109  237085 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1119 22:42:20.201939  237085 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 22:42:20.202016  237085 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 22:42:20.234563  237085 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 22:42:20.234640  237085 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 22:42:20.413350  237085 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 22:42:20.413388  237085 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 22:42:20.452543  237085 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 22:42:20.452582  237085 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 22:42:20.587944  237085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 22:42:20.592874  237085 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 22:42:20.592937  237085 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 22:42:20.941429  237085 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 22:42:20.941500  237085 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 22:42:21.029063  237085 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 22:42:21.029134  237085 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 22:42:21.101635  237085 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 22:42:21.101706  237085 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 22:42:21.174630  237085 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 22:42:21.174696  237085 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 22:42:21.217324  237085 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:42:21.217393  237085 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 22:42:21.253143  237085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1119 22:42:18.329178  229740 node_ready.go:57] node "no-preload-546032" has "Ready":"False" status (will retry)
	I1119 22:42:20.328211  229740 node_ready.go:49] node "no-preload-546032" is "Ready"
	I1119 22:42:20.328245  229740 node_ready.go:38] duration metric: took 13.003024349s for node "no-preload-546032" to be "Ready" ...
	I1119 22:42:20.328259  229740 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:42:20.328317  229740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:42:20.367939  229740 api_server.go:72] duration metric: took 16.3525511s to wait for apiserver process to appear ...
	I1119 22:42:20.367965  229740 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:42:20.367984  229740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:42:20.380467  229740 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 22:42:20.381621  229740 api_server.go:141] control plane version: v1.34.1
	I1119 22:42:20.381650  229740 api_server.go:131] duration metric: took 13.676862ms to wait for apiserver health ...
	I1119 22:42:20.381659  229740 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:42:20.385485  229740 system_pods.go:59] 8 kube-system pods found
	I1119 22:42:20.385525  229740 system_pods.go:61] "coredns-66bc5c9577-zfwqs" [4bca950a-75ef-497a-bf1a-3d9afc453781] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:42:20.385533  229740 system_pods.go:61] "etcd-no-preload-546032" [b7847e4c-889b-4cc6-a0a4-30b13870f108] Running
	I1119 22:42:20.385539  229740 system_pods.go:61] "kindnet-7gnnb" [ae0ef6bc-60b3-4ecb-9560-3f1b68b52283] Running
	I1119 22:42:20.385544  229740 system_pods.go:61] "kube-apiserver-no-preload-546032" [a658971f-15f1-4232-b80e-4f2c726ae85a] Running
	I1119 22:42:20.385550  229740 system_pods.go:61] "kube-controller-manager-no-preload-546032" [0bfbe07b-573c-406c-822c-3e5eedac83be] Running
	I1119 22:42:20.385554  229740 system_pods.go:61] "kube-proxy-7jlnv" [7af96874-cf7c-4c21-af70-1a42d5dda694] Running
	I1119 22:42:20.385560  229740 system_pods.go:61] "kube-scheduler-no-preload-546032" [06fd018a-af64-440b-876b-f6c17a3683d3] Running
	I1119 22:42:20.385566  229740 system_pods.go:61] "storage-provisioner" [fe24326d-4109-4df6-84e7-aec86a450201] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:42:20.385572  229740 system_pods.go:74] duration metric: took 3.904326ms to wait for pod list to return data ...
	I1119 22:42:20.385587  229740 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:42:20.394577  229740 default_sa.go:45] found service account: "default"
	I1119 22:42:20.394656  229740 default_sa.go:55] duration metric: took 9.062383ms for default service account to be created ...
	I1119 22:42:20.394683  229740 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:42:20.398482  229740 system_pods.go:86] 8 kube-system pods found
	I1119 22:42:20.398513  229740 system_pods.go:89] "coredns-66bc5c9577-zfwqs" [4bca950a-75ef-497a-bf1a-3d9afc453781] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:42:20.398520  229740 system_pods.go:89] "etcd-no-preload-546032" [b7847e4c-889b-4cc6-a0a4-30b13870f108] Running
	I1119 22:42:20.398526  229740 system_pods.go:89] "kindnet-7gnnb" [ae0ef6bc-60b3-4ecb-9560-3f1b68b52283] Running
	I1119 22:42:20.398530  229740 system_pods.go:89] "kube-apiserver-no-preload-546032" [a658971f-15f1-4232-b80e-4f2c726ae85a] Running
	I1119 22:42:20.398535  229740 system_pods.go:89] "kube-controller-manager-no-preload-546032" [0bfbe07b-573c-406c-822c-3e5eedac83be] Running
	I1119 22:42:20.398539  229740 system_pods.go:89] "kube-proxy-7jlnv" [7af96874-cf7c-4c21-af70-1a42d5dda694] Running
	I1119 22:42:20.398542  229740 system_pods.go:89] "kube-scheduler-no-preload-546032" [06fd018a-af64-440b-876b-f6c17a3683d3] Running
	I1119 22:42:20.398548  229740 system_pods.go:89] "storage-provisioner" [fe24326d-4109-4df6-84e7-aec86a450201] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:42:20.398569  229740 retry.go:31] will retry after 271.556044ms: missing components: kube-dns
	I1119 22:42:20.676804  229740 system_pods.go:86] 8 kube-system pods found
	I1119 22:42:20.676835  229740 system_pods.go:89] "coredns-66bc5c9577-zfwqs" [4bca950a-75ef-497a-bf1a-3d9afc453781] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:42:20.676842  229740 system_pods.go:89] "etcd-no-preload-546032" [b7847e4c-889b-4cc6-a0a4-30b13870f108] Running
	I1119 22:42:20.676848  229740 system_pods.go:89] "kindnet-7gnnb" [ae0ef6bc-60b3-4ecb-9560-3f1b68b52283] Running
	I1119 22:42:20.676852  229740 system_pods.go:89] "kube-apiserver-no-preload-546032" [a658971f-15f1-4232-b80e-4f2c726ae85a] Running
	I1119 22:42:20.676857  229740 system_pods.go:89] "kube-controller-manager-no-preload-546032" [0bfbe07b-573c-406c-822c-3e5eedac83be] Running
	I1119 22:42:20.676861  229740 system_pods.go:89] "kube-proxy-7jlnv" [7af96874-cf7c-4c21-af70-1a42d5dda694] Running
	I1119 22:42:20.676865  229740 system_pods.go:89] "kube-scheduler-no-preload-546032" [06fd018a-af64-440b-876b-f6c17a3683d3] Running
	I1119 22:42:20.676870  229740 system_pods.go:89] "storage-provisioner" [fe24326d-4109-4df6-84e7-aec86a450201] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:42:20.676886  229740 retry.go:31] will retry after 363.279863ms: missing components: kube-dns
	I1119 22:42:21.046408  229740 system_pods.go:86] 8 kube-system pods found
	I1119 22:42:21.046438  229740 system_pods.go:89] "coredns-66bc5c9577-zfwqs" [4bca950a-75ef-497a-bf1a-3d9afc453781] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:42:21.046445  229740 system_pods.go:89] "etcd-no-preload-546032" [b7847e4c-889b-4cc6-a0a4-30b13870f108] Running
	I1119 22:42:21.046451  229740 system_pods.go:89] "kindnet-7gnnb" [ae0ef6bc-60b3-4ecb-9560-3f1b68b52283] Running
	I1119 22:42:21.046455  229740 system_pods.go:89] "kube-apiserver-no-preload-546032" [a658971f-15f1-4232-b80e-4f2c726ae85a] Running
	I1119 22:42:21.046460  229740 system_pods.go:89] "kube-controller-manager-no-preload-546032" [0bfbe07b-573c-406c-822c-3e5eedac83be] Running
	I1119 22:42:21.046464  229740 system_pods.go:89] "kube-proxy-7jlnv" [7af96874-cf7c-4c21-af70-1a42d5dda694] Running
	I1119 22:42:21.046468  229740 system_pods.go:89] "kube-scheduler-no-preload-546032" [06fd018a-af64-440b-876b-f6c17a3683d3] Running
	I1119 22:42:21.046478  229740 system_pods.go:89] "storage-provisioner" [fe24326d-4109-4df6-84e7-aec86a450201] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:42:21.046492  229740 retry.go:31] will retry after 451.094592ms: missing components: kube-dns
	I1119 22:42:21.503548  229740 system_pods.go:86] 8 kube-system pods found
	I1119 22:42:21.503588  229740 system_pods.go:89] "coredns-66bc5c9577-zfwqs" [4bca950a-75ef-497a-bf1a-3d9afc453781] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:42:21.503596  229740 system_pods.go:89] "etcd-no-preload-546032" [b7847e4c-889b-4cc6-a0a4-30b13870f108] Running
	I1119 22:42:21.503603  229740 system_pods.go:89] "kindnet-7gnnb" [ae0ef6bc-60b3-4ecb-9560-3f1b68b52283] Running
	I1119 22:42:21.503608  229740 system_pods.go:89] "kube-apiserver-no-preload-546032" [a658971f-15f1-4232-b80e-4f2c726ae85a] Running
	I1119 22:42:21.503618  229740 system_pods.go:89] "kube-controller-manager-no-preload-546032" [0bfbe07b-573c-406c-822c-3e5eedac83be] Running
	I1119 22:42:21.503622  229740 system_pods.go:89] "kube-proxy-7jlnv" [7af96874-cf7c-4c21-af70-1a42d5dda694] Running
	I1119 22:42:21.503627  229740 system_pods.go:89] "kube-scheduler-no-preload-546032" [06fd018a-af64-440b-876b-f6c17a3683d3] Running
	I1119 22:42:21.503633  229740 system_pods.go:89] "storage-provisioner" [fe24326d-4109-4df6-84e7-aec86a450201] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:42:21.503651  229740 retry.go:31] will retry after 414.306942ms: missing components: kube-dns
	I1119 22:42:21.922538  229740 system_pods.go:86] 8 kube-system pods found
	I1119 22:42:21.922620  229740 system_pods.go:89] "coredns-66bc5c9577-zfwqs" [4bca950a-75ef-497a-bf1a-3d9afc453781] Running
	I1119 22:42:21.922651  229740 system_pods.go:89] "etcd-no-preload-546032" [b7847e4c-889b-4cc6-a0a4-30b13870f108] Running
	I1119 22:42:21.922670  229740 system_pods.go:89] "kindnet-7gnnb" [ae0ef6bc-60b3-4ecb-9560-3f1b68b52283] Running
	I1119 22:42:21.922692  229740 system_pods.go:89] "kube-apiserver-no-preload-546032" [a658971f-15f1-4232-b80e-4f2c726ae85a] Running
	I1119 22:42:21.922724  229740 system_pods.go:89] "kube-controller-manager-no-preload-546032" [0bfbe07b-573c-406c-822c-3e5eedac83be] Running
	I1119 22:42:21.922742  229740 system_pods.go:89] "kube-proxy-7jlnv" [7af96874-cf7c-4c21-af70-1a42d5dda694] Running
	I1119 22:42:21.922762  229740 system_pods.go:89] "kube-scheduler-no-preload-546032" [06fd018a-af64-440b-876b-f6c17a3683d3] Running
	I1119 22:42:21.922807  229740 system_pods.go:89] "storage-provisioner" [fe24326d-4109-4df6-84e7-aec86a450201] Running
	I1119 22:42:21.922830  229740 system_pods.go:126] duration metric: took 1.528126095s to wait for k8s-apps to be running ...
	I1119 22:42:21.922851  229740 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:42:21.922936  229740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:42:21.941670  229740 system_svc.go:56] duration metric: took 18.807685ms WaitForService to wait for kubelet
	I1119 22:42:21.941747  229740 kubeadm.go:587] duration metric: took 17.926362102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:42:21.941794  229740 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:42:21.949130  229740 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:42:21.949210  229740 node_conditions.go:123] node cpu capacity is 2
	I1119 22:42:21.949239  229740 node_conditions.go:105] duration metric: took 7.411413ms to run NodePressure ...
	I1119 22:42:21.949265  229740 start.go:242] waiting for startup goroutines ...
	I1119 22:42:21.949301  229740 start.go:247] waiting for cluster config update ...
	I1119 22:42:21.949326  229740 start.go:256] writing updated cluster config ...
	I1119 22:42:21.949673  229740 ssh_runner.go:195] Run: rm -f paused
	I1119 22:42:21.954022  229740 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:42:21.957678  229740 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zfwqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:21.961845  229740 pod_ready.go:94] pod "coredns-66bc5c9577-zfwqs" is "Ready"
	I1119 22:42:21.961923  229740 pod_ready.go:86] duration metric: took 4.173818ms for pod "coredns-66bc5c9577-zfwqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:21.964708  229740 pod_ready.go:83] waiting for pod "etcd-no-preload-546032" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:21.975091  229740 pod_ready.go:94] pod "etcd-no-preload-546032" is "Ready"
	I1119 22:42:21.975166  229740 pod_ready.go:86] duration metric: took 10.395927ms for pod "etcd-no-preload-546032" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:21.977847  229740 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-546032" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:21.984384  229740 pod_ready.go:94] pod "kube-apiserver-no-preload-546032" is "Ready"
	I1119 22:42:21.984456  229740 pod_ready.go:86] duration metric: took 6.544533ms for pod "kube-apiserver-no-preload-546032" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:21.986908  229740 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-546032" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:22.358418  229740 pod_ready.go:94] pod "kube-controller-manager-no-preload-546032" is "Ready"
	I1119 22:42:22.358444  229740 pod_ready.go:86] duration metric: took 371.471936ms for pod "kube-controller-manager-no-preload-546032" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:22.558649  229740 pod_ready.go:83] waiting for pod "kube-proxy-7jlnv" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:22.958208  229740 pod_ready.go:94] pod "kube-proxy-7jlnv" is "Ready"
	I1119 22:42:22.958286  229740 pod_ready.go:86] duration metric: took 399.614196ms for pod "kube-proxy-7jlnv" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:23.158690  229740 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-546032" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:23.558188  229740 pod_ready.go:94] pod "kube-scheduler-no-preload-546032" is "Ready"
	I1119 22:42:23.558267  229740 pod_ready.go:86] duration metric: took 399.500249ms for pod "kube-scheduler-no-preload-546032" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:42:23.558296  229740 pod_ready.go:40] duration metric: took 1.604193291s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:42:23.662832  229740 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:42:23.666230  229740 out.go:179] * Done! kubectl is now configured to use "no-preload-546032" cluster and "default" namespace by default
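	The per-pod readiness loop above (pod_ready.go) boils down to watching each control-plane pod's Ready condition. A rough manual equivalent, assuming the no-preload-546032 context from this log is still configured locally:
	
		kubectl --context no-preload-546032 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s
	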
	I1119 22:42:24.866620  237085 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:42:24.866663  237085 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:42:25.043654  237085 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 22:42:25.364890  237085 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:42:25.388261  237085 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:42:25.864777  237085 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:42:25.893510  237085 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:42:26.108217  237085 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.128660777s)
	I1119 22:42:26.364589  237085 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:42:26.381367  237085 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:42:26.864867  237085 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:42:26.873662  237085 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:42:27.364928  237085 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:42:27.374444  237085 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:42:27.821837  237085 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.771776563s)
	I1119 22:42:27.821927  237085 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.233911143s)
	I1119 22:42:27.821954  237085 addons.go:480] Verifying addon metrics-server=true in "newest-cni-616827"
	I1119 22:42:27.822052  237085 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.568813623s)
	I1119 22:42:27.825290  237085 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-616827 addons enable metrics-server
	
	I1119 22:42:27.828229  237085 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1119 22:42:27.831237  237085 addons.go:515] duration metric: took 8.519818176s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1119 22:42:27.864383  237085 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:42:27.872519  237085 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:42:28.364774  237085 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:42:28.373339  237085 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:42:28.864805  237085 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:42:28.872937  237085 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:42:28.874054  237085 api_server.go:141] control plane version: v1.34.1
	I1119 22:42:28.874082  237085 api_server.go:131] duration metric: took 9.010190044s to wait for apiserver health ...
	I1119 22:42:28.874092  237085 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:42:28.877482  237085 system_pods.go:59] 9 kube-system pods found
	I1119 22:42:28.877526  237085 system_pods.go:61] "coredns-66bc5c9577-pzr59" [733bdf88-8909-423d-bc3c-1b7388066377] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:42:28.877537  237085 system_pods.go:61] "etcd-newest-cni-616827" [84b9ec1a-3f6e-4080-b9ab-fd18e5d89d34] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:42:28.877544  237085 system_pods.go:61] "kindnet-kwmxw" [a7623242-9961-4987-bcc9-5d0ccbc842bd] Running
	I1119 22:42:28.877551  237085 system_pods.go:61] "kube-apiserver-newest-cni-616827" [ea3bb869-2723-43dd-883a-782b7fdb0669] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:42:28.877562  237085 system_pods.go:61] "kube-controller-manager-newest-cni-616827" [57027fe4-e4a2-4ef1-88d1-998886f36675] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:42:28.877570  237085 system_pods.go:61] "kube-proxy-7tvth" [519228c6-e260-415c-a8be-63a235e11fc0] Running
	I1119 22:42:28.877578  237085 system_pods.go:61] "kube-scheduler-newest-cni-616827" [d416d725-a8a2-4f6b-a708-e1ad0bed093b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:42:28.877588  237085 system_pods.go:61] "metrics-server-746fcd58dc-7bpkg" [40302a26-3643-4787-8a48-d1b9200de992] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:42:28.877595  237085 system_pods.go:61] "storage-provisioner" [9f010f91-e8d5-4ea4-a18e-48b1a114382b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:42:28.877601  237085 system_pods.go:74] duration metric: took 3.503715ms to wait for pod list to return data ...
	I1119 22:42:28.877614  237085 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:42:28.880360  237085 default_sa.go:45] found service account: "default"
	I1119 22:42:28.880388  237085 default_sa.go:55] duration metric: took 2.767338ms for default service account to be created ...
	I1119 22:42:28.880400  237085 kubeadm.go:587] duration metric: took 9.569412706s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 22:42:28.880418  237085 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:42:28.882929  237085 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:42:28.882961  237085 node_conditions.go:123] node cpu capacity is 2
	I1119 22:42:28.882973  237085 node_conditions.go:105] duration metric: took 2.54528ms to run NodePressure ...
	I1119 22:42:28.883000  237085 start.go:242] waiting for startup goroutines ...
	I1119 22:42:28.883009  237085 start.go:247] waiting for cluster config update ...
	I1119 22:42:28.883030  237085 start.go:256] writing updated cluster config ...
	I1119 22:42:28.883355  237085 ssh_runner.go:195] Run: rm -f paused
	I1119 22:42:28.943299  237085 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:42:28.950743  237085 out.go:179] * Done! kubectl is now configured to use "newest-cni-616827" cluster and "default" namespace by default
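	
The api_server.go lines above show minikube's readiness loop: GET https://192.168.85.2:8443/healthz, treat anything other than 200 "ok" as not ready (the verbose "[-] poststarthook/... failed: reason withheld" listing is what a not-yet-ready apiserver returns), and retry until a timeout. A minimal Go sketch of that polling pattern, under stated assumptions: this is not minikube's actual implementation, the URL and timings are placeholders, and TLS verification is skipped only because the test cluster's apiserver certificate is self-signed.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 with body "ok",
    // mirroring the "Checking apiserver healthz" loop in the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Assumption: self-signed test-cluster certificate; real code
    		// should load the cluster CA instead of skipping verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }

The same per-check breakdown seen at the top of this log can be fetched by hand with kubectl get --raw '/healthz?verbose'.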
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2f0ecfa11597e       1611cd07b61d5       7 seconds ago       Running             busybox                   0                   00092e3e8697c       busybox                                     default
	305896e4ff26a       138784d87c9c5       13 seconds ago      Running             coredns                   0                   ed89407fa05c0       coredns-66bc5c9577-zfwqs                    kube-system
	3adda05c9140e       66749159455b3       13 seconds ago      Running             storage-provisioner       0                   ed7deef8a9d86       storage-provisioner                         kube-system
	1817f9ef386ff       b1a8c6f707935       25 seconds ago      Running             kindnet-cni               0                   569fc7b5bcfde       kindnet-7gnnb                               kube-system
	366ecd30b59e8       05baa95f5142d       28 seconds ago      Running             kube-proxy                0                   96ee72b63e11b       kube-proxy-7jlnv                            kube-system
	7a6b0a4bef8a8       7eb2c6ff0c5a7       46 seconds ago      Running             kube-controller-manager   0                   3fe009683e8d1       kube-controller-manager-no-preload-546032   kube-system
	53d89767af4c5       b5f57ec6b9867       46 seconds ago      Running             kube-scheduler            0                   766bf29276159       kube-scheduler-no-preload-546032            kube-system
	86fabecca864d       43911e833d64d       46 seconds ago      Running             kube-apiserver            0                   4d44fcc92ea38       kube-apiserver-no-preload-546032            kube-system
	963d0b451828b       a1894772a478e       46 seconds ago      Running             etcd                      0                   9aeaaf9a415e7       etcd-no-preload-546032                      kube-system
	
	
	==> containerd <==
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.625312079Z" level=info msg="connecting to shim 3adda05c9140e135ffb5799828979aaf47110911602cd44b2e1fac6b165378f9" address="unix:///run/containerd/s/dfb78c2a7db66c9410c7a609a5d91d12363e689b0b26198b0734b0efa33591d9" protocol=ttrpc version=3
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.762718910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zfwqs,Uid:4bca950a-75ef-497a-bf1a-3d9afc453781,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed89407fa05c01b926d57532886c4509f1c092efc35108c99827357bff644b3f\""
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.781969513Z" level=info msg="CreateContainer within sandbox \"ed89407fa05c01b926d57532886c4509f1c092efc35108c99827357bff644b3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.840508849Z" level=info msg="StartContainer for \"3adda05c9140e135ffb5799828979aaf47110911602cd44b2e1fac6b165378f9\" returns successfully"
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.853960321Z" level=info msg="Container 305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.871154542Z" level=info msg="CreateContainer within sandbox \"ed89407fa05c01b926d57532886c4509f1c092efc35108c99827357bff644b3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607\""
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.874353114Z" level=info msg="StartContainer for \"305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607\""
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.875414507Z" level=info msg="connecting to shim 305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607" address="unix:///run/containerd/s/e385e2778288e484ddf95f304a6bee3f5cba8439fb72ea54fcc323107831fd99" protocol=ttrpc version=3
	Nov 19 22:42:21 no-preload-546032 containerd[759]: time="2025-11-19T22:42:21.006327448Z" level=info msg="StartContainer for \"305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607\" returns successfully"
	Nov 19 22:42:24 no-preload-546032 containerd[759]: time="2025-11-19T22:42:24.267295631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17fd3c67-5d08-43d8-88c9-cad71f87f288,Namespace:default,Attempt:0,}"
	Nov 19 22:42:24 no-preload-546032 containerd[759]: time="2025-11-19T22:42:24.364781755Z" level=info msg="connecting to shim 00092e3e8697c5a8701d402d741f7419cff70d174a06f1d2b9db95ec08a23593" address="unix:///run/containerd/s/eeff241e410107f1fbb670cc9e34d15ff140d97f183839038ca72e46cab071cc" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:42:24 no-preload-546032 containerd[759]: time="2025-11-19T22:42:24.476061040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17fd3c67-5d08-43d8-88c9-cad71f87f288,Namespace:default,Attempt:0,} returns sandbox id \"00092e3e8697c5a8701d402d741f7419cff70d174a06f1d2b9db95ec08a23593\""
	Nov 19 22:42:24 no-preload-546032 containerd[759]: time="2025-11-19T22:42:24.480172785Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.565769608Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.569165958Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.571294151Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.575533102Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.576332174Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.095986414s"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.576448491Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.609628388Z" level=info msg="CreateContainer within sandbox \"00092e3e8697c5a8701d402d741f7419cff70d174a06f1d2b9db95ec08a23593\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.649905158Z" level=info msg="Container 2f0ecfa11597ea1ff93fb4679a5bbf8edec0c4fa87fa4a85920400cb2acd4d52: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.660008159Z" level=info msg="CreateContainer within sandbox \"00092e3e8697c5a8701d402d741f7419cff70d174a06f1d2b9db95ec08a23593\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"2f0ecfa11597ea1ff93fb4679a5bbf8edec0c4fa87fa4a85920400cb2acd4d52\""
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.663846835Z" level=info msg="StartContainer for \"2f0ecfa11597ea1ff93fb4679a5bbf8edec0c4fa87fa4a85920400cb2acd4d52\""
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.665198807Z" level=info msg="connecting to shim 2f0ecfa11597ea1ff93fb4679a5bbf8edec0c4fa87fa4a85920400cb2acd4d52" address="unix:///run/containerd/s/eeff241e410107f1fbb670cc9e34d15ff140d97f183839038ca72e46cab071cc" protocol=ttrpc version=3
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.785939764Z" level=info msg="StartContainer for \"2f0ecfa11597ea1ff93fb4679a5bbf8edec0c4fa87fa4a85920400cb2acd4d52\" returns successfully"
	
	
	==> coredns [305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46483 - 26472 "HINFO IN 5349532989424335883.4160873922246323538. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015375216s
	
	
	==> describe nodes <==
	Name:               no-preload-546032
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-546032
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=no-preload-546032
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_42_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:41:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-546032
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:42:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:42:30 +0000   Wed, 19 Nov 2025 22:41:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:42:30 +0000   Wed, 19 Nov 2025 22:41:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:42:30 +0000   Wed, 19 Nov 2025 22:41:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:42:30 +0000   Wed, 19 Nov 2025 22:42:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-546032
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                7be4d512-63e2-4144-b0ab-2366d9b1089a
	  Boot ID:                    b3875353-65b3-44b7-ad72-afadd7e2486a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-zfwqs                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-546032                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-7gnnb                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-546032             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-no-preload-546032    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-7jlnv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-546032             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node no-preload-546032 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node no-preload-546032 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x7 over 47s)  kubelet          Node no-preload-546032 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-546032 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-546032 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-546032 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-546032 event: Registered Node no-preload-546032 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-546032 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.032038] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[Nov19 21:18] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034282] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.730183] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.763794] kauditd_printk_skb: 36 callbacks suppressed
	[Nov19 21:50] hrtimer: interrupt took 11278311 ns
	
	
	==> etcd [963d0b451828b677633f3cf9f3512f5acaede97c8c243f6d9d108d05932d2ea2] <==
	{"level":"warn","ts":"2025-11-19T22:41:53.276548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.304518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.342512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.394610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.445452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.494221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.526080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.575217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.605224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.632729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.693199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.710658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.731257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.770541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.795341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.830303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.852091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.877239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.888158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.906484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.926418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.948737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.961192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.982598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:54.119998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59650","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:42:34 up  1:24,  0 user,  load average: 6.06, 4.76, 3.46
	Linux no-preload-546032 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1817f9ef386ff37474cb9953a60aa72f13a1d2aade1a34487e3c06379b5be6ab] <==
	I1119 22:42:09.709821       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:42:09.710098       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:42:09.710468       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:42:09.710494       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:42:09.710509       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:42:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:42:09.986437       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:42:09.986470       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:42:09.986483       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:42:09.987449       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:42:10.096878       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:42:10.096912       1 metrics.go:72] Registering metrics
	I1119 22:42:10.096978       1 controller.go:711] "Syncing nftables rules"
	I1119 22:42:19.905789       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:42:19.905838       1 main.go:301] handling current node
	I1119 22:42:29.898242       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:42:29.898281       1 main.go:301] handling current node
	
	
	==> kube-apiserver [86fabecca864dcd6fdb14db2f49eddcb4110553304268c1f7f6ec588d04e5458] <==
	I1119 22:41:56.209747       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 22:41:56.270621       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:41:56.296323       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:41:56.313884       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:41:56.433128       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:41:56.435426       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:41:56.435455       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:41:56.752793       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:41:56.774799       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:41:56.778758       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:41:58.146991       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:41:58.212042       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:41:58.364306       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:41:58.388896       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:41:58.390827       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:41:58.409575       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:41:58.423599       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:41:59.133520       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:41:59.204205       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:41:59.256085       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:42:03.688863       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:42:04.178429       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:42:04.192846       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:42:04.777780       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 22:42:33.214860       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:41988: use of closed network connection
	
	
	==> kube-controller-manager [7a6b0a4bef8a81c1653924cbb995686864a434c6e182ee1faf302b3d12a5b79e] <==
	I1119 22:42:03.558760       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:42:03.559141       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:42:03.559296       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:42:03.560646       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:42:03.560827       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:42:03.561061       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:42:03.561276       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 22:42:03.562482       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:42:03.569183       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:42:03.569470       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:42:03.569558       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:42:03.569668       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:42:03.571832       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:42:03.572187       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:42:03.573225       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:42:03.573748       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:42:03.573763       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:42:03.573772       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:42:03.572291       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:42:03.586227       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:42:03.586803       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:42:03.610265       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:42:03.620677       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:42:03.644016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:42:23.539513       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [366ecd30b59e8ab987f905dbbe05674abe0548c446a37ebebce4e4de6c4f24e1] <==
	I1119 22:42:06.578528       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:42:06.719878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:42:06.820415       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:42:06.820456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 22:42:06.820547       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:42:06.883009       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:42:06.883064       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:42:06.893027       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:42:06.893599       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:42:06.893640       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:42:06.895047       1 config.go:200] "Starting service config controller"
	I1119 22:42:06.895065       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:42:06.895082       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:42:06.895086       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:42:06.895123       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:42:06.895138       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:42:06.905800       1 config.go:309] "Starting node config controller"
	I1119 22:42:06.905826       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:42:06.905835       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:42:06.998268       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:42:06.998314       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:42:06.998353       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [53d89767af4c502939ee0041c31c7112c85e41e785f20e08211c8368a1f87472] <==
	E1119 22:41:56.704877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 22:41:56.717261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:41:56.717371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:41:56.727778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:41:56.728156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:41:56.728327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:41:56.728377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:41:56.728601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:41:56.728699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:41:56.728847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:41:56.728931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:41:56.729114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:41:56.729224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:41:56.729354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:41:56.729454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:41:56.729626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:41:56.729742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:41:56.734650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:41:57.600710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:41:57.620451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:41:57.658610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:41:57.662507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:41:57.671872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:41:57.726517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1119 22:41:58.296772       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:42:00 no-preload-546032 kubelet[2107]: I1119 22:42:00.529534    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-546032" podStartSLOduration=1.5295134940000001 podStartE2EDuration="1.529513494s" podCreationTimestamp="2025-11-19 22:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:00.524774928 +0000 UTC m=+1.476283982" watchObservedRunningTime="2025-11-19 22:42:00.529513494 +0000 UTC m=+1.481022549"
	Nov 19 22:42:00 no-preload-546032 kubelet[2107]: I1119 22:42:00.544641    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-546032" podStartSLOduration=1.544491432 podStartE2EDuration="1.544491432s" podCreationTimestamp="2025-11-19 22:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:00.544501123 +0000 UTC m=+1.496010177" watchObservedRunningTime="2025-11-19 22:42:00.544491432 +0000 UTC m=+1.496000577"
	Nov 19 22:42:00 no-preload-546032 kubelet[2107]: I1119 22:42:00.586577    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-546032" podStartSLOduration=1.586555121 podStartE2EDuration="1.586555121s" podCreationTimestamp="2025-11-19 22:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:00.565223996 +0000 UTC m=+1.516733247" watchObservedRunningTime="2025-11-19 22:42:00.586555121 +0000 UTC m=+1.538064175"
	Nov 19 22:42:03 no-preload-546032 kubelet[2107]: I1119 22:42:03.639833    2107 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:42:03 no-preload-546032 kubelet[2107]: I1119 22:42:03.641464    2107 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.170446    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7af96874-cf7c-4c21-af70-1a42d5dda694-kube-proxy\") pod \"kube-proxy-7jlnv\" (UID: \"7af96874-cf7c-4c21-af70-1a42d5dda694\") " pod="kube-system/kube-proxy-7jlnv"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.170492    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7af96874-cf7c-4c21-af70-1a42d5dda694-xtables-lock\") pod \"kube-proxy-7jlnv\" (UID: \"7af96874-cf7c-4c21-af70-1a42d5dda694\") " pod="kube-system/kube-proxy-7jlnv"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.170510    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7af96874-cf7c-4c21-af70-1a42d5dda694-lib-modules\") pod \"kube-proxy-7jlnv\" (UID: \"7af96874-cf7c-4c21-af70-1a42d5dda694\") " pod="kube-system/kube-proxy-7jlnv"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.170532    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7pdb\" (UniqueName: \"kubernetes.io/projected/7af96874-cf7c-4c21-af70-1a42d5dda694-kube-api-access-p7pdb\") pod \"kube-proxy-7jlnv\" (UID: \"7af96874-cf7c-4c21-af70-1a42d5dda694\") " pod="kube-system/kube-proxy-7jlnv"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.332117    2107 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.374310    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae0ef6bc-60b3-4ecb-9560-3f1b68b52283-xtables-lock\") pod \"kindnet-7gnnb\" (UID: \"ae0ef6bc-60b3-4ecb-9560-3f1b68b52283\") " pod="kube-system/kindnet-7gnnb"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.374362    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ae0ef6bc-60b3-4ecb-9560-3f1b68b52283-cni-cfg\") pod \"kindnet-7gnnb\" (UID: \"ae0ef6bc-60b3-4ecb-9560-3f1b68b52283\") " pod="kube-system/kindnet-7gnnb"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.374382    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae0ef6bc-60b3-4ecb-9560-3f1b68b52283-lib-modules\") pod \"kindnet-7gnnb\" (UID: \"ae0ef6bc-60b3-4ecb-9560-3f1b68b52283\") " pod="kube-system/kindnet-7gnnb"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.374404    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qnvg\" (UniqueName: \"kubernetes.io/projected/ae0ef6bc-60b3-4ecb-9560-3f1b68b52283-kube-api-access-2qnvg\") pod \"kindnet-7gnnb\" (UID: \"ae0ef6bc-60b3-4ecb-9560-3f1b68b52283\") " pod="kube-system/kindnet-7gnnb"
	Nov 19 22:42:06 no-preload-546032 kubelet[2107]: I1119 22:42:06.545104    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7jlnv" podStartSLOduration=2.545086112 podStartE2EDuration="2.545086112s" podCreationTimestamp="2025-11-19 22:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:06.545047293 +0000 UTC m=+7.496556347" watchObservedRunningTime="2025-11-19 22:42:06.545086112 +0000 UTC m=+7.496595166"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.007087    2107 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.060816    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7gnnb" podStartSLOduration=12.959261961 podStartE2EDuration="16.060788574s" podCreationTimestamp="2025-11-19 22:42:04 +0000 UTC" firstStartedPulling="2025-11-19 22:42:06.212210174 +0000 UTC m=+7.163719220" lastFinishedPulling="2025-11-19 22:42:09.313736779 +0000 UTC m=+10.265245833" observedRunningTime="2025-11-19 22:42:10.575927583 +0000 UTC m=+11.527436645" watchObservedRunningTime="2025-11-19 22:42:20.060788574 +0000 UTC m=+21.012297628"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.162429    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bca950a-75ef-497a-bf1a-3d9afc453781-config-volume\") pod \"coredns-66bc5c9577-zfwqs\" (UID: \"4bca950a-75ef-497a-bf1a-3d9afc453781\") " pod="kube-system/coredns-66bc5c9577-zfwqs"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.162658    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fe24326d-4109-4df6-84e7-aec86a450201-tmp\") pod \"storage-provisioner\" (UID: \"fe24326d-4109-4df6-84e7-aec86a450201\") " pod="kube-system/storage-provisioner"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.162769    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv47r\" (UniqueName: \"kubernetes.io/projected/fe24326d-4109-4df6-84e7-aec86a450201-kube-api-access-wv47r\") pod \"storage-provisioner\" (UID: \"fe24326d-4109-4df6-84e7-aec86a450201\") " pod="kube-system/storage-provisioner"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.162867    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r58lx\" (UniqueName: \"kubernetes.io/projected/4bca950a-75ef-497a-bf1a-3d9afc453781-kube-api-access-r58lx\") pod \"coredns-66bc5c9577-zfwqs\" (UID: \"4bca950a-75ef-497a-bf1a-3d9afc453781\") " pod="kube-system/coredns-66bc5c9577-zfwqs"
	Nov 19 22:42:21 no-preload-546032 kubelet[2107]: I1119 22:42:21.636785    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.636757843 podStartE2EDuration="14.636757843s" podCreationTimestamp="2025-11-19 22:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:21.615635246 +0000 UTC m=+22.567144309" watchObservedRunningTime="2025-11-19 22:42:21.636757843 +0000 UTC m=+22.588266889"
	Nov 19 22:42:23 no-preload-546032 kubelet[2107]: I1119 22:42:23.953795    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zfwqs" podStartSLOduration=19.953774784 podStartE2EDuration="19.953774784s" podCreationTimestamp="2025-11-19 22:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:21.637842949 +0000 UTC m=+22.589352003" watchObservedRunningTime="2025-11-19 22:42:23.953774784 +0000 UTC m=+24.905283830"
	Nov 19 22:42:24 no-preload-546032 kubelet[2107]: I1119 22:42:24.092708    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7z2\" (UniqueName: \"kubernetes.io/projected/17fd3c67-5d08-43d8-88c9-cad71f87f288-kube-api-access-pq7z2\") pod \"busybox\" (UID: \"17fd3c67-5d08-43d8-88c9-cad71f87f288\") " pod="default/busybox"
	Nov 19 22:42:27 no-preload-546032 kubelet[2107]: I1119 22:42:27.631260    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.519130843 podStartE2EDuration="4.631244054s" podCreationTimestamp="2025-11-19 22:42:23 +0000 UTC" firstStartedPulling="2025-11-19 22:42:24.47954608 +0000 UTC m=+25.431055134" lastFinishedPulling="2025-11-19 22:42:26.591659291 +0000 UTC m=+27.543168345" observedRunningTime="2025-11-19 22:42:27.630886471 +0000 UTC m=+28.582395525" watchObservedRunningTime="2025-11-19 22:42:27.631244054 +0000 UTC m=+28.582753099"
	
	
	==> storage-provisioner [3adda05c9140e135ffb5799828979aaf47110911602cd44b2e1fac6b165378f9] <==
	I1119 22:42:20.815937       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:42:20.853631       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:42:20.853696       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:42:20.865523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:20.876191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:42:20.876334       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:42:20.876484       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-546032_d15bd771-27d6-440d-bde6-9bac34d15a3c!
	I1119 22:42:20.877402       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a162edd-3d45-4825-bcaf-625808233e67", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-546032_d15bd771-27d6-440d-bde6-9bac34d15a3c became leader
	W1119 22:42:20.894441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:20.910410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:42:20.981297       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-546032_d15bd771-27d6-440d-bde6-9bac34d15a3c!
	W1119 22:42:22.913370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:22.919344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:24.922506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:24.930102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:26.934566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:26.943053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:28.946415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:28.951534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:30.955995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:30.961537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:32.964808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:32.973763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:34.977564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:34.994471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-546032 -n no-preload-546032
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-546032 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
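
The repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings in the storage-provisioner log above come from its leader election, which still takes its lock on a v1 Endpoints object (the "k8s.io-minikube-hostpath" lease the log shows being acquired). A minimal sketch of the Lease-based lock the warning points to, assuming client-go's leaderelection package; the lock name is taken from the log, but the rest (identity, timings, callbacks) is illustrative and not the provisioner's actual code:

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
		"k8s.io/klog/v2"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			klog.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		hostname, _ := os.Hostname()

		// Lease-based lock: stored as a coordination.k8s.io/v1 Lease instead of
		// a v1 Endpoints object, so the API server emits no deprecation warning.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // same lock name as in the log above
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings, not the provisioner's
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					klog.Info("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					klog.Fatal("lost lease, exiting")
				},
			},
		})
	}
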
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-546032
helpers_test.go:243: (dbg) docker inspect no-preload-546032:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b",
	        "Created": "2025-11-19T22:41:08.746470937Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230046,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:41:08.823161158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b/hosts",
	        "LogPath": "/var/lib/docker/containers/5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b/5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b-json.log",
	        "Name": "/no-preload-546032",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-546032:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-546032",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5b0cbe10f040fb0ab16c715f414301bd2ae525fa4ea913ecec64d62880b2702b",
	                "LowerDir": "/var/lib/docker/overlay2/1d841c8585429c8f1d31118c3dc242e39657934f28ccc1982fcb0b12edf19a8d-init/diff:/var/lib/docker/overlay2/b6ebc9601ea0ae08484f263713f3358dd93f7748ebfafbd9155229908dee9606/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d841c8585429c8f1d31118c3dc242e39657934f28ccc1982fcb0b12edf19a8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d841c8585429c8f1d31118c3dc242e39657934f28ccc1982fcb0b12edf19a8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d841c8585429c8f1d31118c3dc242e39657934f28ccc1982fcb0b12edf19a8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-546032",
	                "Source": "/var/lib/docker/volumes/no-preload-546032/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-546032",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-546032",
	                "name.minikube.sigs.k8s.io": "no-preload-546032",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b75d9f517bb1c633ebea44ace499ed9447f9ab1045e962fe14ee8c8296fa724",
	            "SandboxKey": "/var/run/docker/netns/9b75d9f517bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-546032": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:b6:18:d7:fe:29",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "195e3ba18813077b9cf1f8edba491f377c674c0a95616e6e54cf04871c173ac3",
	                    "EndpointID": "e438a178494a65df8afebe830a0c577f453ae1326761f44911cd433af25d3297",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-546032",
	                        "5b0cbe10f040"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
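
The inspect dump above is collected by shelling out to `docker inspect`; the same fields (for example `State.Status`, `RestartCount`, and the empty `HostConfig.Ulimits` list) can also be read programmatically. A minimal sketch using the Docker Engine Go SDK, assuming `github.com/docker/docker/client` and a local daemon reachable via the usual environment variables:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		// Connect using DOCKER_HOST etc., negotiating an API version the daemon supports.
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Same data as `docker inspect no-preload-546032`, but as typed structs.
		info, err := cli.ContainerInspect(context.Background(), "no-preload-546032")
		if err != nil {
			log.Fatal(err)
		}

		// An empty Ulimits slice (as in the dump above) means the container
		// inherits the daemon's default limits rather than setting its own.
		fmt.Printf("status=%s restarts=%d ulimits=%v\n",
			info.State.Status, info.RestartCount, info.HostConfig.Ulimits)
	}
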
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-546032 -n no-preload-546032
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-546032 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-546032 logs -n 25: (1.531595933s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-227235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:40 UTC │ 19 Nov 25 22:40 UTC │
	│ start   │ -p embed-certs-227235 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:40 UTC │ 19 Nov 25 22:41 UTC │
	│ image   │ default-k8s-diff-port-570856 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ pause   │ -p default-k8s-diff-port-570856 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ unpause │ -p default-k8s-diff-port-570856 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ delete  │ -p default-k8s-diff-port-570856                                                                                                                                                                                                                     │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ delete  │ -p default-k8s-diff-port-570856                                                                                                                                                                                                                     │ default-k8s-diff-port-570856 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ delete  │ -p disable-driver-mounts-063316                                                                                                                                                                                                                     │ disable-driver-mounts-063316 │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ start   │ -p no-preload-546032 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-546032            │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:42 UTC │
	│ image   │ embed-certs-227235 image list --format=json                                                                                                                                                                                                         │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ pause   │ -p embed-certs-227235 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ unpause │ -p embed-certs-227235 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ delete  │ -p embed-certs-227235                                                                                                                                                                                                                               │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ delete  │ -p embed-certs-227235                                                                                                                                                                                                                               │ embed-certs-227235           │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:41 UTC │
	│ start   │ -p newest-cni-616827 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:41 UTC │ 19 Nov 25 22:42 UTC │
	│ addons  │ enable metrics-server -p newest-cni-616827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ stop    │ -p newest-cni-616827 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-616827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ start   │ -p newest-cni-616827 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ image   │ newest-cni-616827 image list --format=json                                                                                                                                                                                                          │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ pause   │ -p newest-cni-616827 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ unpause │ -p newest-cni-616827 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ delete  │ -p newest-cni-616827                                                                                                                                                                                                                                │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ delete  │ -p newest-cni-616827                                                                                                                                                                                                                                │ newest-cni-616827            │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │ 19 Nov 25 22:42 UTC │
	│ start   │ -p auto-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-156590                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:42 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:42:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:42:36.142006  240877 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:42:36.142125  240877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:42:36.142131  240877 out.go:374] Setting ErrFile to fd 2...
	I1119 22:42:36.142305  240877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:42:36.142614  240877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:42:36.143136  240877 out.go:368] Setting JSON to false
	I1119 22:42:36.144097  240877 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5077,"bootTime":1763587079,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 22:42:36.144459  240877 start.go:143] virtualization:  
	I1119 22:42:36.148460  240877 out.go:179] * [auto-156590] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:42:36.152014  240877 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:42:36.152180  240877 notify.go:221] Checking for updates...
	I1119 22:42:36.158932  240877 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:42:36.161997  240877 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:42:36.165478  240877 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 22:42:36.168591  240877 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:42:36.172373  240877 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:42:36.175778  240877 config.go:182] Loaded profile config "no-preload-546032": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:42:36.175907  240877 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:42:36.214269  240877 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:42:36.214407  240877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:42:36.319536  240877 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:42:36.310085915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:42:36.319643  240877 docker.go:319] overlay module found
	I1119 22:42:36.322882  240877 out.go:179] * Using the docker driver based on user configuration
	I1119 22:42:36.325877  240877 start.go:309] selected driver: docker
	I1119 22:42:36.325898  240877 start.go:930] validating driver "docker" against <nil>
	I1119 22:42:36.325912  240877 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:42:36.326756  240877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:42:36.427796  240877 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:42:36.417842724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:42:36.427958  240877 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:42:36.428182  240877 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:42:36.431201  240877 out.go:179] * Using Docker driver with root privileges
	I1119 22:42:36.434133  240877 cni.go:84] Creating CNI manager for ""
	I1119 22:42:36.434236  240877 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 22:42:36.434246  240877 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:42:36.434339  240877 start.go:353] cluster config:
	{Name:auto-156590 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-156590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:42:36.437475  240877 out.go:179] * Starting "auto-156590" primary control-plane node in "auto-156590" cluster
	I1119 22:42:36.440270  240877 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 22:42:36.443339  240877 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:42:36.446256  240877 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 22:42:36.446307  240877 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1119 22:42:36.446318  240877 cache.go:65] Caching tarball of preloaded images
	I1119 22:42:36.446326  240877 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:42:36.446413  240877 preload.go:238] Found /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1119 22:42:36.446428  240877 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 22:42:36.446544  240877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/config.json ...
	I1119 22:42:36.446563  240877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/config.json: {Name:mk141e49419eb177106fc8dc4bcb216f2b4e59f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:42:36.483890  240877 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:42:36.483916  240877 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:42:36.483929  240877 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:42:36.483959  240877 start.go:360] acquireMachinesLock for auto-156590: {Name:mka0b54043273aaa8e586991fdc697b5b9c25f9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:42:36.484129  240877 start.go:364] duration metric: took 152.412µs to acquireMachinesLock for "auto-156590"
	I1119 22:42:36.484177  240877 start.go:93] Provisioning new machine with config: &{Name:auto-156590 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-156590 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 22:42:36.484287  240877 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2f0ecfa11597e       1611cd07b61d5       10 seconds ago      Running             busybox                   0                   00092e3e8697c       busybox                                     default
	305896e4ff26a       138784d87c9c5       16 seconds ago      Running             coredns                   0                   ed89407fa05c0       coredns-66bc5c9577-zfwqs                    kube-system
	3adda05c9140e       66749159455b3       16 seconds ago      Running             storage-provisioner       0                   ed7deef8a9d86       storage-provisioner                         kube-system
	1817f9ef386ff       b1a8c6f707935       27 seconds ago      Running             kindnet-cni               0                   569fc7b5bcfde       kindnet-7gnnb                               kube-system
	366ecd30b59e8       05baa95f5142d       31 seconds ago      Running             kube-proxy                0                   96ee72b63e11b       kube-proxy-7jlnv                            kube-system
	7a6b0a4bef8a8       7eb2c6ff0c5a7       48 seconds ago      Running             kube-controller-manager   0                   3fe009683e8d1       kube-controller-manager-no-preload-546032   kube-system
	53d89767af4c5       b5f57ec6b9867       48 seconds ago      Running             kube-scheduler            0                   766bf29276159       kube-scheduler-no-preload-546032            kube-system
	86fabecca864d       43911e833d64d       48 seconds ago      Running             kube-apiserver            0                   4d44fcc92ea38       kube-apiserver-no-preload-546032            kube-system
	963d0b451828b       a1894772a478e       48 seconds ago      Running             etcd                      0                   9aeaaf9a415e7       etcd-no-preload-546032                      kube-system
	
	
	==> containerd <==
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.625312079Z" level=info msg="connecting to shim 3adda05c9140e135ffb5799828979aaf47110911602cd44b2e1fac6b165378f9" address="unix:///run/containerd/s/dfb78c2a7db66c9410c7a609a5d91d12363e689b0b26198b0734b0efa33591d9" protocol=ttrpc version=3
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.762718910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zfwqs,Uid:4bca950a-75ef-497a-bf1a-3d9afc453781,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed89407fa05c01b926d57532886c4509f1c092efc35108c99827357bff644b3f\""
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.781969513Z" level=info msg="CreateContainer within sandbox \"ed89407fa05c01b926d57532886c4509f1c092efc35108c99827357bff644b3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.840508849Z" level=info msg="StartContainer for \"3adda05c9140e135ffb5799828979aaf47110911602cd44b2e1fac6b165378f9\" returns successfully"
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.853960321Z" level=info msg="Container 305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.871154542Z" level=info msg="CreateContainer within sandbox \"ed89407fa05c01b926d57532886c4509f1c092efc35108c99827357bff644b3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607\""
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.874353114Z" level=info msg="StartContainer for \"305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607\""
	Nov 19 22:42:20 no-preload-546032 containerd[759]: time="2025-11-19T22:42:20.875414507Z" level=info msg="connecting to shim 305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607" address="unix:///run/containerd/s/e385e2778288e484ddf95f304a6bee3f5cba8439fb72ea54fcc323107831fd99" protocol=ttrpc version=3
	Nov 19 22:42:21 no-preload-546032 containerd[759]: time="2025-11-19T22:42:21.006327448Z" level=info msg="StartContainer for \"305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607\" returns successfully"
	Nov 19 22:42:24 no-preload-546032 containerd[759]: time="2025-11-19T22:42:24.267295631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17fd3c67-5d08-43d8-88c9-cad71f87f288,Namespace:default,Attempt:0,}"
	Nov 19 22:42:24 no-preload-546032 containerd[759]: time="2025-11-19T22:42:24.364781755Z" level=info msg="connecting to shim 00092e3e8697c5a8701d402d741f7419cff70d174a06f1d2b9db95ec08a23593" address="unix:///run/containerd/s/eeff241e410107f1fbb670cc9e34d15ff140d97f183839038ca72e46cab071cc" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 22:42:24 no-preload-546032 containerd[759]: time="2025-11-19T22:42:24.476061040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17fd3c67-5d08-43d8-88c9-cad71f87f288,Namespace:default,Attempt:0,} returns sandbox id \"00092e3e8697c5a8701d402d741f7419cff70d174a06f1d2b9db95ec08a23593\""
	Nov 19 22:42:24 no-preload-546032 containerd[759]: time="2025-11-19T22:42:24.480172785Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.565769608Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.569165958Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.571294151Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.575533102Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.576332174Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.095986414s"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.576448491Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.609628388Z" level=info msg="CreateContainer within sandbox \"00092e3e8697c5a8701d402d741f7419cff70d174a06f1d2b9db95ec08a23593\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.649905158Z" level=info msg="Container 2f0ecfa11597ea1ff93fb4679a5bbf8edec0c4fa87fa4a85920400cb2acd4d52: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.660008159Z" level=info msg="CreateContainer within sandbox \"00092e3e8697c5a8701d402d741f7419cff70d174a06f1d2b9db95ec08a23593\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"2f0ecfa11597ea1ff93fb4679a5bbf8edec0c4fa87fa4a85920400cb2acd4d52\""
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.663846835Z" level=info msg="StartContainer for \"2f0ecfa11597ea1ff93fb4679a5bbf8edec0c4fa87fa4a85920400cb2acd4d52\""
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.665198807Z" level=info msg="connecting to shim 2f0ecfa11597ea1ff93fb4679a5bbf8edec0c4fa87fa4a85920400cb2acd4d52" address="unix:///run/containerd/s/eeff241e410107f1fbb670cc9e34d15ff140d97f183839038ca72e46cab071cc" protocol=ttrpc version=3
	Nov 19 22:42:26 no-preload-546032 containerd[759]: time="2025-11-19T22:42:26.785939764Z" level=info msg="StartContainer for \"2f0ecfa11597ea1ff93fb4679a5bbf8edec0c4fa87fa4a85920400cb2acd4d52\" returns successfully"
	
	
	==> coredns [305896e4ff26ad59c08b7cbe7b7fdd76e64eef2244c3d316e8f62eadf2acd607] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46483 - 26472 "HINFO IN 5349532989424335883.4160873922246323538. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015375216s
	
	
	==> describe nodes <==
	Name:               no-preload-546032
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-546032
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=no-preload-546032
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_42_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:41:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-546032
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:42:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:42:30 +0000   Wed, 19 Nov 2025 22:41:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:42:30 +0000   Wed, 19 Nov 2025 22:41:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:42:30 +0000   Wed, 19 Nov 2025 22:41:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:42:30 +0000   Wed, 19 Nov 2025 22:42:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-546032
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                7be4d512-63e2-4144-b0ab-2366d9b1089a
	  Boot ID:                    b3875353-65b3-44b7-ad72-afadd7e2486a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-zfwqs                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     33s
	  kube-system                 etcd-no-preload-546032                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-7gnnb                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-no-preload-546032             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-no-preload-546032    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-7jlnv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-no-preload-546032             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Normal   NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node no-preload-546032 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node no-preload-546032 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     50s (x7 over 50s)  kubelet          Node no-preload-546032 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  50s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  38s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  38s                kubelet          Node no-preload-546032 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    38s                kubelet          Node no-preload-546032 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s                kubelet          Node no-preload-546032 status is now: NodeHasSufficientPID
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           34s                node-controller  Node no-preload-546032 event: Registered Node no-preload-546032 in Controller
	  Normal   NodeReady                17s                kubelet          Node no-preload-546032 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.032038] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[Nov19 21:18] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034282] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.730183] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.763794] kauditd_printk_skb: 36 callbacks suppressed
	[Nov19 21:50] hrtimer: interrupt took 11278311 ns
	
	
	==> etcd [963d0b451828b677633f3cf9f3512f5acaede97c8c243f6d9d108d05932d2ea2] <==
	{"level":"warn","ts":"2025-11-19T22:41:53.276548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.304518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.342512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.394610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.445452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.494221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.526080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.575217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.605224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.632729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.693199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.710658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.731257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.770541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.795341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.830303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.852091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.877239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.888158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.906484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.926418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.948737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.961192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:53.982598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:41:54.119998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59650","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:42:37 up  1:24,  0 user,  load average: 6.06, 4.76, 3.46
	Linux no-preload-546032 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1817f9ef386ff37474cb9953a60aa72f13a1d2aade1a34487e3c06379b5be6ab] <==
	I1119 22:42:09.709821       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:42:09.710098       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:42:09.710468       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:42:09.710494       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:42:09.710509       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:42:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:42:09.986437       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:42:09.986470       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:42:09.986483       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:42:09.987449       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:42:10.096878       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:42:10.096912       1 metrics.go:72] Registering metrics
	I1119 22:42:10.096978       1 controller.go:711] "Syncing nftables rules"
	I1119 22:42:19.905789       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:42:19.905838       1 main.go:301] handling current node
	I1119 22:42:29.898242       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:42:29.898281       1 main.go:301] handling current node
	
	
	==> kube-apiserver [86fabecca864dcd6fdb14db2f49eddcb4110553304268c1f7f6ec588d04e5458] <==
	I1119 22:41:56.209747       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 22:41:56.270621       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:41:56.296323       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:41:56.313884       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:41:56.433128       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:41:56.435426       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:41:56.435455       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:41:56.752793       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:41:56.774799       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:41:56.778758       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:41:58.146991       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:41:58.212042       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:41:58.364306       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:41:58.388896       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:41:58.390827       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:41:58.409575       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:41:58.423599       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:41:59.133520       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:41:59.204205       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:41:59.256085       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:42:03.688863       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:42:04.178429       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:42:04.192846       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:42:04.777780       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 22:42:33.214860       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:41988: use of closed network connection
	
	
	==> kube-controller-manager [7a6b0a4bef8a81c1653924cbb995686864a434c6e182ee1faf302b3d12a5b79e] <==
	I1119 22:42:03.558760       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:42:03.559141       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:42:03.559296       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:42:03.560646       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:42:03.560827       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:42:03.561061       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:42:03.561276       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 22:42:03.562482       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:42:03.569183       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:42:03.569470       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:42:03.569558       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:42:03.569668       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:42:03.571832       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:42:03.572187       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:42:03.573225       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:42:03.573748       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:42:03.573763       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:42:03.573772       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:42:03.572291       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:42:03.586227       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:42:03.586803       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:42:03.610265       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:42:03.620677       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:42:03.644016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:42:23.539513       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [366ecd30b59e8ab987f905dbbe05674abe0548c446a37ebebce4e4de6c4f24e1] <==
	I1119 22:42:06.578528       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:42:06.719878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:42:06.820415       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:42:06.820456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 22:42:06.820547       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:42:06.883009       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:42:06.883064       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:42:06.893027       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:42:06.893599       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:42:06.893640       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:42:06.895047       1 config.go:200] "Starting service config controller"
	I1119 22:42:06.895065       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:42:06.895082       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:42:06.895086       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:42:06.895123       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:42:06.895138       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:42:06.905800       1 config.go:309] "Starting node config controller"
	I1119 22:42:06.905826       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:42:06.905835       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:42:06.998268       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:42:06.998314       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:42:06.998353       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [53d89767af4c502939ee0041c31c7112c85e41e785f20e08211c8368a1f87472] <==
	E1119 22:41:56.704877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 22:41:56.717261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:41:56.717371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:41:56.727778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:41:56.728156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:41:56.728327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:41:56.728377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:41:56.728601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:41:56.728699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:41:56.728847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:41:56.728931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:41:56.729114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:41:56.729224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:41:56.729354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:41:56.729454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:41:56.729626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:41:56.729742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:41:56.734650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:41:57.600710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:41:57.620451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:41:57.658610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:41:57.662507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:41:57.671872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:41:57.726517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1119 22:41:58.296772       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:42:00 no-preload-546032 kubelet[2107]: I1119 22:42:00.529534    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-546032" podStartSLOduration=1.5295134940000001 podStartE2EDuration="1.529513494s" podCreationTimestamp="2025-11-19 22:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:00.524774928 +0000 UTC m=+1.476283982" watchObservedRunningTime="2025-11-19 22:42:00.529513494 +0000 UTC m=+1.481022549"
	Nov 19 22:42:00 no-preload-546032 kubelet[2107]: I1119 22:42:00.544641    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-546032" podStartSLOduration=1.544491432 podStartE2EDuration="1.544491432s" podCreationTimestamp="2025-11-19 22:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:00.544501123 +0000 UTC m=+1.496010177" watchObservedRunningTime="2025-11-19 22:42:00.544491432 +0000 UTC m=+1.496000577"
	Nov 19 22:42:00 no-preload-546032 kubelet[2107]: I1119 22:42:00.586577    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-546032" podStartSLOduration=1.586555121 podStartE2EDuration="1.586555121s" podCreationTimestamp="2025-11-19 22:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:00.565223996 +0000 UTC m=+1.516733247" watchObservedRunningTime="2025-11-19 22:42:00.586555121 +0000 UTC m=+1.538064175"
	Nov 19 22:42:03 no-preload-546032 kubelet[2107]: I1119 22:42:03.639833    2107 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:42:03 no-preload-546032 kubelet[2107]: I1119 22:42:03.641464    2107 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.170446    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7af96874-cf7c-4c21-af70-1a42d5dda694-kube-proxy\") pod \"kube-proxy-7jlnv\" (UID: \"7af96874-cf7c-4c21-af70-1a42d5dda694\") " pod="kube-system/kube-proxy-7jlnv"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.170492    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7af96874-cf7c-4c21-af70-1a42d5dda694-xtables-lock\") pod \"kube-proxy-7jlnv\" (UID: \"7af96874-cf7c-4c21-af70-1a42d5dda694\") " pod="kube-system/kube-proxy-7jlnv"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.170510    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7af96874-cf7c-4c21-af70-1a42d5dda694-lib-modules\") pod \"kube-proxy-7jlnv\" (UID: \"7af96874-cf7c-4c21-af70-1a42d5dda694\") " pod="kube-system/kube-proxy-7jlnv"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.170532    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7pdb\" (UniqueName: \"kubernetes.io/projected/7af96874-cf7c-4c21-af70-1a42d5dda694-kube-api-access-p7pdb\") pod \"kube-proxy-7jlnv\" (UID: \"7af96874-cf7c-4c21-af70-1a42d5dda694\") " pod="kube-system/kube-proxy-7jlnv"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.332117    2107 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.374310    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae0ef6bc-60b3-4ecb-9560-3f1b68b52283-xtables-lock\") pod \"kindnet-7gnnb\" (UID: \"ae0ef6bc-60b3-4ecb-9560-3f1b68b52283\") " pod="kube-system/kindnet-7gnnb"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.374362    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ae0ef6bc-60b3-4ecb-9560-3f1b68b52283-cni-cfg\") pod \"kindnet-7gnnb\" (UID: \"ae0ef6bc-60b3-4ecb-9560-3f1b68b52283\") " pod="kube-system/kindnet-7gnnb"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.374382    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae0ef6bc-60b3-4ecb-9560-3f1b68b52283-lib-modules\") pod \"kindnet-7gnnb\" (UID: \"ae0ef6bc-60b3-4ecb-9560-3f1b68b52283\") " pod="kube-system/kindnet-7gnnb"
	Nov 19 22:42:05 no-preload-546032 kubelet[2107]: I1119 22:42:05.374404    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qnvg\" (UniqueName: \"kubernetes.io/projected/ae0ef6bc-60b3-4ecb-9560-3f1b68b52283-kube-api-access-2qnvg\") pod \"kindnet-7gnnb\" (UID: \"ae0ef6bc-60b3-4ecb-9560-3f1b68b52283\") " pod="kube-system/kindnet-7gnnb"
	Nov 19 22:42:06 no-preload-546032 kubelet[2107]: I1119 22:42:06.545104    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7jlnv" podStartSLOduration=2.545086112 podStartE2EDuration="2.545086112s" podCreationTimestamp="2025-11-19 22:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:06.545047293 +0000 UTC m=+7.496556347" watchObservedRunningTime="2025-11-19 22:42:06.545086112 +0000 UTC m=+7.496595166"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.007087    2107 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.060816    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7gnnb" podStartSLOduration=12.959261961 podStartE2EDuration="16.060788574s" podCreationTimestamp="2025-11-19 22:42:04 +0000 UTC" firstStartedPulling="2025-11-19 22:42:06.212210174 +0000 UTC m=+7.163719220" lastFinishedPulling="2025-11-19 22:42:09.313736779 +0000 UTC m=+10.265245833" observedRunningTime="2025-11-19 22:42:10.575927583 +0000 UTC m=+11.527436645" watchObservedRunningTime="2025-11-19 22:42:20.060788574 +0000 UTC m=+21.012297628"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.162429    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bca950a-75ef-497a-bf1a-3d9afc453781-config-volume\") pod \"coredns-66bc5c9577-zfwqs\" (UID: \"4bca950a-75ef-497a-bf1a-3d9afc453781\") " pod="kube-system/coredns-66bc5c9577-zfwqs"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.162658    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fe24326d-4109-4df6-84e7-aec86a450201-tmp\") pod \"storage-provisioner\" (UID: \"fe24326d-4109-4df6-84e7-aec86a450201\") " pod="kube-system/storage-provisioner"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.162769    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv47r\" (UniqueName: \"kubernetes.io/projected/fe24326d-4109-4df6-84e7-aec86a450201-kube-api-access-wv47r\") pod \"storage-provisioner\" (UID: \"fe24326d-4109-4df6-84e7-aec86a450201\") " pod="kube-system/storage-provisioner"
	Nov 19 22:42:20 no-preload-546032 kubelet[2107]: I1119 22:42:20.162867    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r58lx\" (UniqueName: \"kubernetes.io/projected/4bca950a-75ef-497a-bf1a-3d9afc453781-kube-api-access-r58lx\") pod \"coredns-66bc5c9577-zfwqs\" (UID: \"4bca950a-75ef-497a-bf1a-3d9afc453781\") " pod="kube-system/coredns-66bc5c9577-zfwqs"
	Nov 19 22:42:21 no-preload-546032 kubelet[2107]: I1119 22:42:21.636785    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.636757843 podStartE2EDuration="14.636757843s" podCreationTimestamp="2025-11-19 22:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:21.615635246 +0000 UTC m=+22.567144309" watchObservedRunningTime="2025-11-19 22:42:21.636757843 +0000 UTC m=+22.588266889"
	Nov 19 22:42:23 no-preload-546032 kubelet[2107]: I1119 22:42:23.953795    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zfwqs" podStartSLOduration=19.953774784 podStartE2EDuration="19.953774784s" podCreationTimestamp="2025-11-19 22:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:42:21.637842949 +0000 UTC m=+22.589352003" watchObservedRunningTime="2025-11-19 22:42:23.953774784 +0000 UTC m=+24.905283830"
	Nov 19 22:42:24 no-preload-546032 kubelet[2107]: I1119 22:42:24.092708    2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7z2\" (UniqueName: \"kubernetes.io/projected/17fd3c67-5d08-43d8-88c9-cad71f87f288-kube-api-access-pq7z2\") pod \"busybox\" (UID: \"17fd3c67-5d08-43d8-88c9-cad71f87f288\") " pod="default/busybox"
	Nov 19 22:42:27 no-preload-546032 kubelet[2107]: I1119 22:42:27.631260    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.519130843 podStartE2EDuration="4.631244054s" podCreationTimestamp="2025-11-19 22:42:23 +0000 UTC" firstStartedPulling="2025-11-19 22:42:24.47954608 +0000 UTC m=+25.431055134" lastFinishedPulling="2025-11-19 22:42:26.591659291 +0000 UTC m=+27.543168345" observedRunningTime="2025-11-19 22:42:27.630886471 +0000 UTC m=+28.582395525" watchObservedRunningTime="2025-11-19 22:42:27.631244054 +0000 UTC m=+28.582753099"
	
	
	==> storage-provisioner [3adda05c9140e135ffb5799828979aaf47110911602cd44b2e1fac6b165378f9] <==
	I1119 22:42:20.853696       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:42:20.865523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:20.876191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:42:20.876334       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:42:20.876484       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-546032_d15bd771-27d6-440d-bde6-9bac34d15a3c!
	I1119 22:42:20.877402       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a162edd-3d45-4825-bcaf-625808233e67", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-546032_d15bd771-27d6-440d-bde6-9bac34d15a3c became leader
	W1119 22:42:20.894441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:20.910410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:42:20.981297       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-546032_d15bd771-27d6-440d-bde6-9bac34d15a3c!
	W1119 22:42:22.913370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:22.919344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:24.922506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:24.930102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:26.934566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:26.943053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:28.946415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:28.951534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:30.955995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:30.961537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:32.964808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:32.973763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:34.977564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:34.994471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:36.997928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:42:37.009099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-546032 -n no-preload-546032
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-546032 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (14.94s)
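Note on the storage-provisioner log above: the repeated warnings.go:70 lines appear because the provisioner still takes its leader-election lock on a v1 Endpoints object ("k8s.io-minikube-hostpath"), a pattern client-go now flags as deprecated in favor of coordination.k8s.io/v1 Leases. Below is a minimal sketch of the Lease-based equivalent; it assumes in-cluster credentials, and the callback bodies are placeholders, not the provisioner's real controller logic.

// lease_election.go — a minimal sketch of Lease-based leader election with
// client-go; in-cluster credentials and placeholder callbacks are assumptions.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname() // recorded in the Lease's holderIdentity field

	// Lock on a coordination.k8s.io/v1 Lease instead of a v1 Endpoints
	// object, which is what the deprecation warnings above ask for.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; a real provisioner would start its controller here")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; shutting down")
			},
		},
	})
}

Run in-cluster, this takes and renews the same named lock without touching Endpoints, which should make the per-renewal deprecation warnings disappear.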


Test pass (299/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 41.32
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 38.1
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 169.09
29 TestAddons/serial/Volcano 40.7
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 10.87
35 TestAddons/parallel/Registry 16.54
36 TestAddons/parallel/RegistryCreds 0.78
37 TestAddons/parallel/Ingress 19.77
38 TestAddons/parallel/InspektorGadget 10.98
39 TestAddons/parallel/MetricsServer 5.83
41 TestAddons/parallel/CSI 46.62
42 TestAddons/parallel/Headlamp 17.88
43 TestAddons/parallel/CloudSpanner 6.79
44 TestAddons/parallel/LocalPath 53.75
45 TestAddons/parallel/NvidiaDevicePlugin 5.9
46 TestAddons/parallel/Yakd 10.91
48 TestAddons/StoppedEnableDisable 12.34
49 TestCertOptions 40.31
50 TestCertExpiration 233.66
52 TestForceSystemdFlag 49.19
53 TestForceSystemdEnv 46.7
54 TestDockerEnvContainerd 51.79
58 TestErrorSpam/setup 33.65
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.16
61 TestErrorSpam/pause 1.64
62 TestErrorSpam/unpause 1.87
63 TestErrorSpam/stop 2.24
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 47.42
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.18
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
75 TestFunctional/serial/CacheCmd/cache/add_local 1.3
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 42.04
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.45
86 TestFunctional/serial/LogsFileCmd 1.47
87 TestFunctional/serial/InvalidService 4.88
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 10.56
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.25
93 TestFunctional/parallel/StatusCmd 1.29
97 TestFunctional/parallel/ServiceCmdConnect 7.77
98 TestFunctional/parallel/AddonsCmd 0.21
99 TestFunctional/parallel/PersistentVolumeClaim 22.97
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 2.51
104 TestFunctional/parallel/FileSync 0.42
105 TestFunctional/parallel/CertSync 2.23
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
113 TestFunctional/parallel/License 0.28
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.51
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.29
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ServiceCmd/List 0.64
128 TestFunctional/parallel/ProfileCmd/profile_list 0.53
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
131 TestFunctional/parallel/MountCmd/any-port 8.86
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.45
135 TestFunctional/parallel/MountCmd/specific-port 2.36
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.32
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.27
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.03
144 TestFunctional/parallel/ImageCommands/Setup 0.63
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.4
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.25
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.45
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 211.79
163 TestMultiControlPlane/serial/DeployApp 7.6
164 TestMultiControlPlane/serial/PingHostFromPods 1.69
165 TestMultiControlPlane/serial/AddWorkerNode 61.3
166 TestMultiControlPlane/serial/NodeLabels 0.13
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.16
168 TestMultiControlPlane/serial/CopyFile 19.99
169 TestMultiControlPlane/serial/StopSecondaryNode 12.95
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
171 TestMultiControlPlane/serial/RestartSecondaryNode 16.55
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.18
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 101.41
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.17
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
176 TestMultiControlPlane/serial/StopCluster 36.59
177 TestMultiControlPlane/serial/RestartCluster 62.17
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
179 TestMultiControlPlane/serial/AddSecondaryNode 48.2
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.14
185 TestJSONOutput/start/Command 83.62
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.75
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.62
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.98
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.26
210 TestKicCustomNetwork/create_custom_network 50.12
211 TestKicCustomNetwork/use_default_bridge_network 37.96
212 TestKicExistingNetwork 37.52
213 TestKicCustomSubnet 37.33
214 TestKicStaticIP 37.77
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 73.78
219 TestMountStart/serial/StartWithMountFirst 9.4
220 TestMountStart/serial/VerifyMountFirst 0.35
221 TestMountStart/serial/StartWithMountSecond 7.16
222 TestMountStart/serial/VerifyMountSecond 0.29
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.75
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 108.48
231 TestMultiNode/serial/DeployApp2Nodes 5.22
232 TestMultiNode/serial/PingHostFrom2Pods 1.32
233 TestMultiNode/serial/AddNode 28.7
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.77
236 TestMultiNode/serial/CopyFile 10.44
237 TestMultiNode/serial/StopNode 2.45
238 TestMultiNode/serial/StartAfterStop 7.96
239 TestMultiNode/serial/RestartKeepsNodes 72.97
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 24.14
242 TestMultiNode/serial/RestartMultiNode 51.52
243 TestMultiNode/serial/ValidateNameConflict 39.31
248 TestPreload 152.09
250 TestScheduledStopUnix 109.55
253 TestInsufficientStorage 13.06
254 TestRunningBinaryUpgrade 66.22
256 TestKubernetesUpgrade 359.78
257 TestMissingContainerUpgrade 152.98
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 37.79
261 TestNoKubernetes/serial/StartWithStopK8s 17.47
262 TestNoKubernetes/serial/Start 10.3
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
265 TestNoKubernetes/serial/ProfileList 1.19
266 TestNoKubernetes/serial/Stop 1.39
267 TestNoKubernetes/serial/StartNoArgs 8.71
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.41
269 TestStoppedBinaryUpgrade/Setup 1.43
270 TestStoppedBinaryUpgrade/Upgrade 67.82
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.42
280 TestPause/serial/Start 81.63
281 TestPause/serial/SecondStartNoReconfiguration 7.71
282 TestPause/serial/Pause 0.73
283 TestPause/serial/VerifyStatus 0.33
284 TestPause/serial/Unpause 0.63
285 TestPause/serial/PauseAgain 0.82
286 TestPause/serial/DeletePaused 3.11
287 TestPause/serial/VerifyDeletedResources 0.43
295 TestNetworkPlugins/group/false 5.39
300 TestStartStop/group/old-k8s-version/serial/FirstStart 72.99
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
303 TestStartStop/group/old-k8s-version/serial/Stop 12.44
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
305 TestStartStop/group/old-k8s-version/serial/SecondStart 27.38
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
309 TestStartStop/group/old-k8s-version/serial/Pause 4.09
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.29
313 TestStartStop/group/embed-certs/serial/FirstStart 88.13
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
317 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.2
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
319 TestStartStop/group/embed-certs/serial/Stop 12.15
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.19
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
323 TestStartStop/group/embed-certs/serial/SecondStart 59.73
324 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
326 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
327 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.4
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/no-preload/serial/FirstStart 76.14
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.14
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.33
333 TestStartStop/group/embed-certs/serial/Pause 4.49
335 TestStartStop/group/newest-cni/serial/FirstStart 45.1
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
338 TestStartStop/group/newest-cni/serial/Stop 1.41
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/newest-cni/serial/SecondStart 18.08
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
345 TestStartStop/group/newest-cni/serial/Pause 3.37
346 TestNetworkPlugins/group/auto/Start 92.66
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.49
348 TestStartStop/group/no-preload/serial/Stop 12.36
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
350 TestStartStop/group/no-preload/serial/SecondStart 56.33
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.55
353 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
354 TestStartStop/group/no-preload/serial/Pause 3.14
355 TestNetworkPlugins/group/kindnet/Start 85.36
356 TestNetworkPlugins/group/auto/KubeletFlags 0.38
357 TestNetworkPlugins/group/auto/NetCatPod 10.34
358 TestNetworkPlugins/group/auto/DNS 0.29
359 TestNetworkPlugins/group/auto/Localhost 0.19
360 TestNetworkPlugins/group/auto/HairPin 0.29
361 TestNetworkPlugins/group/calico/Start 62.76
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
364 TestNetworkPlugins/group/kindnet/NetCatPod 10.42
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/DNS 0.18
367 TestNetworkPlugins/group/kindnet/Localhost 0.15
368 TestNetworkPlugins/group/kindnet/HairPin 0.16
369 TestNetworkPlugins/group/calico/KubeletFlags 0.32
370 TestNetworkPlugins/group/calico/NetCatPod 10.46
371 TestNetworkPlugins/group/calico/DNS 0.28
372 TestNetworkPlugins/group/calico/Localhost 0.2
373 TestNetworkPlugins/group/calico/HairPin 0.25
374 TestNetworkPlugins/group/custom-flannel/Start 64.47
375 TestNetworkPlugins/group/enable-default-cni/Start 81.51
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
378 TestNetworkPlugins/group/custom-flannel/DNS 0.17
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
381 TestNetworkPlugins/group/flannel/Start 69.3
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.4
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
387 TestNetworkPlugins/group/bridge/Start 85.09
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
390 TestNetworkPlugins/group/flannel/NetCatPod 10.33
391 TestNetworkPlugins/group/flannel/DNS 0.19
392 TestNetworkPlugins/group/flannel/Localhost 0.17
393 TestNetworkPlugins/group/flannel/HairPin 0.16
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
395 TestNetworkPlugins/group/bridge/NetCatPod 9.29
396 TestNetworkPlugins/group/bridge/DNS 0.16
397 TestNetworkPlugins/group/bridge/Localhost 0.14
398 TestNetworkPlugins/group/bridge/HairPin 0.19
TestDownloadOnly/v1.28.0/json-events (41.32s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-514856 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-514856 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (41.319494585s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (41.32s)
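The json-events variant exercises minikube's machine-readable output: with -o=json, progress is emitted on stdout as one JSON object per line (a CloudEvents-style envelope). A minimal reader sketch follows, assuming newline-delimited JSON on stdin and a "data" payload of string fields; envelope fields beyond "type" and "data" are deliberately ignored.

// events_reader.go — a minimal sketch of consuming minikube -o=json output;
// the exact payload shape is an assumption based on this run's test name.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event keeps only the envelope fields this reader cares about.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow long lines
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines mixed into the stream
		}
		fmt.Printf("%-45s %s\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
}

Piping the start command above through this prints one line per emitted step and skips any interleaved non-JSON noise.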

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1119 21:48:18.499628    4144 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1119 21:48:18.499706    4144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
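The check behind preload-exists amounts to a stat of the cached tarball whose path is printed above. A minimal reconstruction, assuming MINIKUBE_HOME points at the .minikube directory; the helper name and the hard-coded v18/overlay2/arm64 naming pieces are copied from this run's log, not from minikube's source.

// preload_check.go — a hypothetical sketch of the "preload exists" check
// implied by the preload.go log lines above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the cache path seen in the log for a given
// Kubernetes version and container runtime.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-arm64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0", "containerd")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", p)
		os.Exit(1)
	}
	fmt.Println("found local preload:", p)
}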

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-514856
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-514856: exit status 85 (87.326584ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-514856 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-514856 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:47:37
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:47:37.224722    4149 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:47:37.224832    4149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:37.224843    4149 out.go:374] Setting ErrFile to fd 2...
	I1119 21:47:37.224847    4149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:37.225140    4149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	W1119 21:47:37.225276    4149 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21918-2347/.minikube/config/config.json: open /home/jenkins/minikube-integration/21918-2347/.minikube/config/config.json: no such file or directory
	I1119 21:47:37.225694    4149 out.go:368] Setting JSON to true
	I1119 21:47:37.226488    4149 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1778,"bootTime":1763587079,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 21:47:37.226556    4149 start.go:143] virtualization:  
	I1119 21:47:37.230623    4149 out.go:99] [download-only-514856] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1119 21:47:37.230886    4149 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball: no such file or directory
	I1119 21:47:37.230948    4149 notify.go:221] Checking for updates...
	I1119 21:47:37.233828    4149 out.go:171] MINIKUBE_LOCATION=21918
	I1119 21:47:37.237087    4149 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:47:37.240002    4149 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 21:47:37.242997    4149 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 21:47:37.245880    4149 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1119 21:47:37.251717    4149 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 21:47:37.252007    4149 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:47:37.273381    4149 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 21:47:37.273479    4149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:37.682264    4149 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-19 21:47:37.673038205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:47:37.682376    4149 docker.go:319] overlay module found
	I1119 21:47:37.685529    4149 out.go:99] Using the docker driver based on user configuration
	I1119 21:47:37.685568    4149 start.go:309] selected driver: docker
	I1119 21:47:37.685576    4149 start.go:930] validating driver "docker" against <nil>
	I1119 21:47:37.685690    4149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:37.743008    4149 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-19 21:47:37.734572308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:47:37.743162    4149 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:47:37.743429    4149 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1119 21:47:37.743622    4149 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 21:47:37.746620    4149 out.go:171] Using Docker driver with root privileges
	I1119 21:47:37.749445    4149 cni.go:84] Creating CNI manager for ""
	I1119 21:47:37.749505    4149 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 21:47:37.749518    4149 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 21:47:37.749597    4149 start.go:353] cluster config:
	{Name:download-only-514856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-514856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:47:37.752564    4149 out.go:99] Starting "download-only-514856" primary control-plane node in "download-only-514856" cluster
	I1119 21:47:37.752584    4149 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 21:47:37.755492    4149 out.go:99] Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:47:37.755551    4149 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 21:47:37.755715    4149 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:47:37.771217    4149 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:47:37.771402    4149 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory
	I1119 21:47:37.771498    4149 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:47:37.829163    4149 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1119 21:47:37.829187    4149 cache.go:65] Caching tarball of preloaded images
	I1119 21:47:37.829337    4149 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 21:47:37.832598    4149 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1119 21:47:37.832625    4149 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1119 21:47:37.922094    4149 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1119 21:47:37.922228    4149 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1119 21:47:43.045843    4149 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 as a tarball
	
	
	* The control-plane node download-only-514856 host does not exist
	  To start a cluster, run: "minikube start -p download-only-514856"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
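
The non-zero exit is the expected result here: a --download-only profile never creates a host, and "minikube logs" against such a profile exits with status 85, which the test counts as a pass. Reproducing the same outcome by hand looks like:

    # Expect exit=85: the control-plane node host was never created.
    out/minikube-linux-arm64 logs -p download-only-514856; echo "exit=$?"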

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-514856
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (38.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-434450 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-434450 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (38.099284609s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (38.10s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1119 21:48:57.036035    4144 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1119 21:48:57.036066    4144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-434450
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-434450: exit status 85 (89.658564ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-514856 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-514856 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ delete  │ -p download-only-514856                                                                                                                                                               │ download-only-514856 │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ start   │ -o=json --download-only -p download-only-434450 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-434450 │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:48:18
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:48:18.974924    4352 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:48:18.975082    4352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:48:18.975109    4352 out.go:374] Setting ErrFile to fd 2...
	I1119 21:48:18.975128    4352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:48:18.975511    4352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 21:48:18.976021    4352 out.go:368] Setting JSON to true
	I1119 21:48:18.977052    4352 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1820,"bootTime":1763587079,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 21:48:18.977154    4352 start.go:143] virtualization:  
	I1119 21:48:18.980763    4352 out.go:99] [download-only-434450] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 21:48:18.981045    4352 notify.go:221] Checking for updates...
	I1119 21:48:18.983968    4352 out.go:171] MINIKUBE_LOCATION=21918
	I1119 21:48:18.986917    4352 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:48:18.990051    4352 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 21:48:18.992952    4352 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 21:48:18.996080    4352 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1119 21:48:19.003092    4352 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 21:48:19.003474    4352 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:48:19.027773    4352 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 21:48:19.027886    4352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:48:19.090949    4352 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 21:48:19.082075543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:48:19.091059    4352 docker.go:319] overlay module found
	I1119 21:48:19.094275    4352 out.go:99] Using the docker driver based on user configuration
	I1119 21:48:19.094323    4352 start.go:309] selected driver: docker
	I1119 21:48:19.094331    4352 start.go:930] validating driver "docker" against <nil>
	I1119 21:48:19.094447    4352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:48:19.158916    4352 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 21:48:19.149276153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:48:19.159075    4352 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:48:19.159380    4352 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1119 21:48:19.159540    4352 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 21:48:19.162656    4352 out.go:171] Using Docker driver with root privileges
	I1119 21:48:19.165392    4352 cni.go:84] Creating CNI manager for ""
	I1119 21:48:19.165460    4352 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 21:48:19.165476    4352 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 21:48:19.165551    4352 start.go:353] cluster config:
	{Name:download-only-434450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-434450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:48:19.168601    4352 out.go:99] Starting "download-only-434450" primary control-plane node in "download-only-434450" cluster
	I1119 21:48:19.168631    4352 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 21:48:19.171544    4352 out.go:99] Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:48:19.171602    4352 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 21:48:19.171766    4352 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:48:19.187346    4352 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:48:19.187525    4352 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory
	I1119 21:48:19.187547    4352 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory, skipping pull
	I1119 21:48:19.187552    4352 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in cache, skipping pull
	I1119 21:48:19.187561    4352 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 as a tarball
	I1119 21:48:19.233204    4352 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1119 21:48:19.233245    4352 cache.go:65] Caching tarball of preloaded images
	I1119 21:48:19.233412    4352 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 21:48:19.236416    4352 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1119 21:48:19.236443    4352 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1119 21:48:19.324546    4352 preload.go:295] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1119 21:48:19.324599    4352 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-434450 host does not exist
	  To start a cluster, run: "minikube start -p download-only-434450"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-434450
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1119 21:48:58.206791    4144 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-318008 --alsologtostderr --binary-mirror http://127.0.0.1:42763 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-318008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-318008
--- PASS: TestBinaryMirror (0.60s)
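
Here minikube is pointed at a local HTTP endpoint via --binary-mirror, so kubectl/kubeadm/kubelet come from the mirror rather than dl.k8s.io (the log above shows the dl.k8s.io checksum URL that would otherwise be used). A rough reproduction, assuming the served directory mimics the dl.k8s.io release layout (e.g. release/v1.34.1/bin/linux/arm64/kubectl); the directory name and port are illustrative:

    # Serve a local directory as the binary mirror, then download through it.
    python3 -m http.server 42763 --directory ./mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:42763 --driver=docker --container-runtime=containerd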

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-030214
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-030214: exit status 85 (64.991115ms)

                                                
                                                
-- stdout --
	* Profile "addons-030214" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-030214"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-030214
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-030214: exit status 85 (59.329673ms)

                                                
                                                
-- stdout --
	* Profile "addons-030214" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-030214"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (169.09s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-030214 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-030214 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m49.0858781s)
--- PASS: TestAddons/Setup (169.09s)
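
This single start enables fifteen addons at once and, because of --wait=true, blocks until the cluster and its watched components report healthy, which is why it dominates the suite at just under three minutes. The resulting addon set can be inspected afterwards with:

    out/minikube-linux-arm64 addons list -p addons-030214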

                                                
                                    
TestAddons/serial/Volcano (40.7s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 70.244975ms
addons_test.go:868: volcano-scheduler stabilized in 70.995355ms
addons_test.go:876: volcano-admission stabilized in 71.033083ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-tjbzm" [97904cd6-80f2-41ec-bb10-e60a5edbcc7c] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003047681s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-f59nw" [f8559ce2-a952-480b-958d-6b82c18f3fbd] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003241603s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-vcxps" [c3295dc6-df43-477e-a465-985b737b908f] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003987204s
addons_test.go:903: (dbg) Run:  kubectl --context addons-030214 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-030214 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-030214 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [53ad5d39-3b53-41eb-beaf-a2cd5e92e469] Pending
helpers_test.go:352: "test-job-nginx-0" [53ad5d39-3b53-41eb-beaf-a2cd5e92e469] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [53ad5d39-3b53-41eb-beaf-a2cd5e92e469] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.015253571s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-030214 addons disable volcano --alsologtostderr -v=1: (12.033243085s)
--- PASS: TestAddons/serial/Volcano (40.70s)
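
The sequence: wait for the three Volcano control-plane deployments (scheduler, admission, controllers) to stabilize, delete the one-shot volcano-admission-init job, submit a vcjob from testdata/vcjob.yaml, and wait for its pod by label. The same wait can be replayed directly against this cluster:

    # Watch the vcjob's pod come up, matching on the label the test uses.
    kubectl --context addons-030214 -n my-volcano get pods \
      -l volcano.sh/job-name=test-job -w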

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-030214 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-030214 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.87s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-030214 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-030214 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5953e3ff-9584-4d02-8233-19a355201a20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5953e3ff-9584-4d02-8233-19a355201a20] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003366818s
addons_test.go:694: (dbg) Run:  kubectl --context addons-030214 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-030214 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-030214 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-030214 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.87s)
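
What this asserts: with fake credentials loaded, the gcp-auth webhook mutates newly created pods so that GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT are set and a credentials file appears at /google-app-creds.json (the paths are taken from the commands above). A combined spot check against the same pod:

    kubectl --context addons-030214 exec busybox -- \
      sh -c 'printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT && head -c 80 /google-app-creds.json'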

                                                
                                    
TestAddons/parallel/Registry (16.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.185214ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-fm98q" [7891f1c8-ec9e-4854-948e-5a68f4dfa8e7] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003420045s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-lf7st" [307677f6-e2b6-4d88-874a-ede0ca50ab40] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003731692s
addons_test.go:392: (dbg) Run:  kubectl --context addons-030214 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-030214 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-030214 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.388645394s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 ip
2025/11/19 21:53:04 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.54s)
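
Two reachability paths are exercised: in-cluster DNS (registry.kube-system.svc.cluster.local, probed from a throwaway busybox pod) and the node-IP registry-proxy on port 5000 (the DEBUG GET above). From the host, the registry can also be probed at the standard Docker registry v2 root; this probe is an addition for illustration, not part of the test:

    curl -sI "http://$(out/minikube-linux-arm64 -p addons-030214 ip):5000/v2/"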

                                                
                                    
TestAddons/parallel/RegistryCreds (0.78s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.656577ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-030214
addons_test.go:332: (dbg) Run:  kubectl --context addons-030214 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.78s)
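
Configuration is non-interactive here: the answers the addon would normally prompt for are read from a JSON file via -f, and the credentials land as secrets in kube-system, which the follow-up "get secret" call inspects. A sketch with a hypothetical answers file:

    out/minikube-linux-arm64 addons configure registry-creds -f ./answers.json -p addons-030214
    kubectl --context addons-030214 -n kube-system get secrets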

                                                
                                    
TestAddons/parallel/Ingress (19.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-030214 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-030214 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-030214 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [b1aeba39-b032-40b0-9c2b-73cf6231d371] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [b1aeba39-b032-40b0-9c2b-73cf6231d371] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003629462s
I1119 21:54:25.613832    4144 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-030214 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-030214 addons disable ingress-dns --alsologtostderr -v=1: (1.29335145s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-030214 addons disable ingress --alsologtostderr -v=1: (7.824424472s)
--- PASS: TestAddons/parallel/Ingress (19.77s)
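
The assertion chain: an Ingress routes nginx.example.com to the test nginx pod (checked with a Host-header curl from inside the node), and ingress-dns resolves hello-john.test when queried at the node IP. Both checks replay cleanly by hand:

    out/minikube-linux-arm64 -p addons-030214 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-030214 ip)"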

                                                
                                    
TestAddons/parallel/InspektorGadget (10.98s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-pj4wh" [39b8217b-1065-48b5-bf81-c1f89be0c492] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004857749s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-030214 addons disable inspektor-gadget --alsologtostderr -v=1: (5.976170387s)
--- PASS: TestAddons/parallel/InspektorGadget (10.98s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.83s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.63728ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4v62f" [3945af55-1d29-4ae7-ba50-6a67cb889544] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003987544s
addons_test.go:463: (dbg) Run:  kubectl --context addons-030214 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)
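
"Healthy" here means more than the pod running: kubectl top must actually return pod metrics, proving the metrics API is served end to end:

    kubectl --context addons-030214 top pods -n kube-system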

                                                
                                    
TestAddons/parallel/CSI (46.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1119 21:53:30.395283    4144 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1119 21:53:30.400099    4144 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1119 21:53:30.400130    4144 kapi.go:107] duration metric: took 7.947262ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.956649ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-030214 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-030214 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [659e47bb-79aa-49ea-9ceb-669465e958a4] Pending
helpers_test.go:352: "task-pv-pod" [659e47bb-79aa-49ea-9ceb-669465e958a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [659e47bb-79aa-49ea-9ceb-669465e958a4] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003404959s
addons_test.go:572: (dbg) Run:  kubectl --context addons-030214 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-030214 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-030214 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-030214 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-030214 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-030214 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-030214 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [0189c88b-713b-4800-93b4-01aa531b990d] Pending
helpers_test.go:352: "task-pv-pod-restore" [0189c88b-713b-4800-93b4-01aa531b990d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [0189c88b-713b-4800-93b4-01aa531b990d] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003726147s
addons_test.go:614: (dbg) Run:  kubectl --context addons-030214 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-030214 delete pod task-pv-pod-restore: (1.09398434s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-030214 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-030214 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-030214 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.126741674s)
--- PASS: TestAddons/parallel/CSI (46.62s)
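
Note: the CSI flow above is: bind a PVC against the hostpath driver, snapshot it, delete the original, then restore the snapshot into a new PVC (hpvc-restore). A minimal manual sketch of the snapshot/restore half; the class names csi-hostpath-snapclass and csi-hostpath-sc are assumptions, since the testdata manifests are not shown in this log:

kubectl --context addons-030214 apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# Poll readiness the same way helpers_test.go does:
kubectl --context addons-030214 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}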

TestAddons/parallel/Headlamp (17.88s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-030214 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-030214 --alsologtostderr -v=1: (1.038538128s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-dxsb2" [e25d5f6d-29e6-453b-90e8-7ccae6115549] Pending
helpers_test.go:352: "headlamp-6945c6f4d-dxsb2" [e25d5f6d-29e6-453b-90e8-7ccae6115549] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-dxsb2" [e25d5f6d-29e6-453b-90e8-7ccae6115549] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003089363s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-030214 addons disable headlamp --alsologtostderr -v=1: (5.8349164s)
--- PASS: TestAddons/parallel/Headlamp (17.88s)

TestAddons/parallel/CloudSpanner (6.79s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-kfkzh" [055bbe73-4ce4-479a-a5f7-cda7661b4bef] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.012355581s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.79s)

TestAddons/parallel/LocalPath (53.75s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-030214 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-030214 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-030214 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8f09a4e7-958f-4e9a-baa3-20a8af7ec8ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [8f09a4e7-958f-4e9a-baa3-20a8af7ec8ce] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [8f09a4e7-958f-4e9a-baa3-20a8af7ec8ce] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003743694s
addons_test.go:967: (dbg) Run:  kubectl --context addons-030214 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 ssh "cat /opt/local-path-provisioner/pvc-7ee4fb6b-d786-425e-b874-78b18b3517b1_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-030214 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-030214 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-030214 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.20474262s)
--- PASS: TestAddons/parallel/LocalPath (53.75s)
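
Note: the local-path provisioner backs each PVC with a plain directory on the node, which is why the test can read the written file back via ssh under /opt/local-path-provisioner. A minimal sketch; the StorageClass name local-path is an assumption (the testdata manifest is not shown in this log), and the PVC stays Pending until a pod consumes it:

kubectl --context addons-030214 apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumed StorageClass name
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 128Mi
EOF
# After a pod writes to the volume, the data is a plain directory on the node:
out/minikube-linux-arm64 -p addons-030214 ssh "ls /opt/local-path-provisioner/"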

TestAddons/parallel/NvidiaDevicePlugin (5.9s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-87q6d" [90db11f9-09d7-4f83-b3e0-27e747ec1807] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004545186s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.90s)

TestAddons/parallel/Yakd (10.91s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-hbd4r" [e54f3443-2532-49e7-b203-eba5b5e3a65a] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003544399s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-030214 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-030214 addons disable yakd --alsologtostderr -v=1: (5.909449485s)
--- PASS: TestAddons/parallel/Yakd (10.91s)

TestAddons/StoppedEnableDisable (12.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-030214
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-030214: (12.056217529s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-030214
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-030214
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-030214
--- PASS: TestAddons/StoppedEnableDisable (12.34s)
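
Note: the point of this test is that addon state can be changed while the cluster is stopped; enable/disable only records the change in the profile config, to be applied on the next start. The same sequence by hand:

out/minikube-linux-arm64 stop -p addons-030214
out/minikube-linux-arm64 addons enable dashboard -p addons-030214    # recorded while stopped
out/minikube-linux-arm64 addons disable dashboard -p addons-030214
out/minikube-linux-arm64 addons disable gvisor -p addons-030214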

TestCertOptions (40.31s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-815306 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-815306 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (37.330346346s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-815306 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-815306 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-815306 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-815306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-815306
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-815306: (2.219658634s)
--- PASS: TestCertOptions (40.31s)
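
Note: this test starts a cluster with extra apiserver SANs (--apiserver-ips/--apiserver-names) and a non-default port, then reads the generated certificate back out of the node. A sketch of the certificate check; the grep filter is illustrative, not part of the test:

out/minikube-linux-arm64 -p cert-options-815306 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# Expect 192.168.15.15 and www.google.com among the IP/DNS SANs,
# and port 8555 in the server URL of /etc/kubernetes/admin.conf.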

TestCertExpiration (233.66s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-750367 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-750367 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.422940376s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-750367 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-750367 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.982764643s)
helpers_test.go:175: Cleaning up "cert-expiration-750367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-750367
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-750367: (4.252696021s)
--- PASS: TestCertExpiration (233.66s)
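
Note: the long wall-clock time (233s for ~53s of commands) is deliberate: the profile is created with --cert-expiration=3m, the test waits out the 3 minutes, and the second start must detect the expired certificates and reissue them under the new 8760h window. Checking expiry by hand (a sketch using openssl's -enddate, not part of the test):

out/minikube-linux-arm64 -p cert-expiration-750367 ssh \
  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
# notAfter should move ~1 year out after the restart with --cert-expiration=8760h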

TestForceSystemdFlag (49.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-759819 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1119 22:33:25.485211    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-759819 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (45.436897954s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-759819 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-759819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-759819
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-759819: (3.273094107s)
--- PASS: TestForceSystemdFlag (49.19s)
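
Note: --force-systemd switches the node's runtime to the systemd cgroup driver, and the test verifies this by reading the rendered containerd config. The relevant runc option in /etc/containerd/config.toml is SystemdCgroup (stated here as an assumption about what the assertion checks for):

out/minikube-linux-arm64 -p force-systemd-flag-759819 ssh \
  "grep SystemdCgroup /etc/containerd/config.toml"
# expected with --force-systemd:  SystemdCgroup = true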

TestForceSystemdEnv (46.7s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-388402 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-388402 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.244078805s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-388402 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-388402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-388402
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-388402: (2.978628985s)
--- PASS: TestForceSystemdEnv (46.70s)

TestDockerEnvContainerd (51.79s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-967888 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-967888 --driver=docker  --container-runtime=containerd: (34.970593391s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-967888"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-967888": (1.120918086s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-cQPPWHF9GbSQ/agent.24092" SSH_AGENT_PID="24093" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-cQPPWHF9GbSQ/agent.24092" SSH_AGENT_PID="24093" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-cQPPWHF9GbSQ/agent.24092" SSH_AGENT_PID="24093" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.455908363s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-cQPPWHF9GbSQ/agent.24092" SSH_AGENT_PID="24093" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-cQPPWHF9GbSQ/agent.24092" SSH_AGENT_PID="24093" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls": (1.076856646s)
helpers_test.go:175: Cleaning up "dockerenv-967888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-967888
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-967888: (2.626271281s)
--- PASS: TestDockerEnvContainerd (51.79s)
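
Note: with the containerd runtime there is no exported daemon socket, so docker-env here runs in SSH mode: --ssh-host emits a DOCKER_HOST=ssh://... line and --ssh-add loads the node key into ssh-agent, after which the host docker CLI talks to the engine inside the node over SSH (the test also pins DOCKER_BUILDKIT=0 for the build). The usual interactive form is:

eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-967888)"
docker version      # now served from inside the dockerenv-967888 node
DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
docker image ls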

TestErrorSpam/setup (33.65s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-743802 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-743802 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-743802 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-743802 --driver=docker  --container-runtime=containerd: (33.648522602s)
--- PASS: TestErrorSpam/setup (33.65s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.87s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

TestErrorSpam/stop (2.24s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 stop: (2.042157111s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743802 --log_dir /tmp/nospam-743802 stop
--- PASS: TestErrorSpam/stop (2.24s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21918-2347/.minikube/files/etc/test/nested/copy/4144/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.42s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-183559 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1119 21:56:48.010094    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:48.017648    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:48.029012    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:48.050449    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:48.091907    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:48.173356    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:48.334932    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:48.656617    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:49.298629    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:50.579942    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:53.142250    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:58.264319    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:57:08.506365    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-183559 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (47.419103563s)
--- PASS: TestFunctional/serial/StartWithProxy (47.42s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.18s)

=== RUN   TestFunctional/serial/SoftStart
I1119 21:57:18.916665    4144 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-183559 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-183559 --alsologtostderr -v=8: (7.174282211s)
functional_test.go:678: soft start took 7.175666711s for "functional-183559" cluster.
I1119 21:57:26.091414    4144 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.18s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-183559 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-183559 cache add registry.k8s.io/pause:3.1: (1.301380141s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-183559 cache add registry.k8s.io/pause:3.3: (1.106925903s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 cache add registry.k8s.io/pause:latest
E1119 21:57:28.988453    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-183559 cache add registry.k8s.io/pause:latest: (1.039086078s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)
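
Note: `cache add` pulls the image on the host, stores it in the local minikube cache (under $MINIKUBE_HOME/cache, an assumption here), and loads it into the node's container runtime, which is why the three pause tags become visible to crictl inside the node. Round trip:

out/minikube-linux-arm64 -p functional-183559 cache add registry.k8s.io/pause:3.1
out/minikube-linux-arm64 cache list
out/minikube-linux-arm64 -p functional-183559 ssh sudo crictl images   # node-side view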

TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-183559 /tmp/TestFunctionalserialCacheCmdcacheadd_local3412952778/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 cache add minikube-local-cache-test:functional-183559
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 cache delete minikube-local-cache-test:functional-183559
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-183559
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-183559 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.042466ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)
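
Note: this block shows the recovery path for a node that has lost a cached image: crictl rmi deletes it from the node, the following inspecti fails with exit status 1, and `cache reload` pushes the host-side cached copy back into the runtime. Condensed:

out/minikube-linux-arm64 -p functional-183559 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-arm64 -p functional-183559 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
out/minikube-linux-arm64 -p functional-183559 cache reload
out/minikube-linux-arm64 -p functional-183559 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again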

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 kubectl -- --context functional-183559 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-183559 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (42.04s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-183559 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1119 21:58:09.950284    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-183559 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.041333769s)
functional_test.go:776: restart took 42.041436244s for "functional-183559" cluster.
I1119 21:58:15.703170    4144 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (42.04s)
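
Note: --extra-config takes component.key=value pairs (component being apiserver, kubelet, scheduler, controller-manager, and so on) that minikube forwards to that component's command line; applying one to an existing profile forces the full restart timed above. For example:

out/minikube-linux-arm64 start -p functional-183559 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all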

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-183559 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.45s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-183559 logs: (1.448927947s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

TestFunctional/serial/LogsFileCmd (1.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 logs --file /tmp/TestFunctionalserialLogsFileCmd190032522/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-183559 logs --file /tmp/TestFunctionalserialLogsFileCmd190032522/001/logs.txt: (1.469053303s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

TestFunctional/serial/InvalidService (4.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-183559 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-183559
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-183559: exit status 115 (443.822959ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31898 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-183559 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-183559 delete -f testdata/invalidsvc.yaml: (1.173944069s)
--- PASS: TestFunctional/serial/InvalidService (4.88s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-183559 config get cpus: exit status 14 (90.479069ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-183559 config get cpus: exit status 14 (69.636401ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
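
Note: the two Non-zero exits above are the expected behavior: `config get` on an unset key exits with status 14 instead of printing an empty value. The full round trip:

out/minikube-linux-arm64 -p functional-183559 config set cpus 2
out/minikube-linux-arm64 -p functional-183559 config get cpus      # prints 2
out/minikube-linux-arm64 -p functional-183559 config unset cpus
out/minikube-linux-arm64 -p functional-183559 config get cpus      # exit status 14: key not found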

TestFunctional/parallel/DashboardCmd (10.56s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-183559 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-183559 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 39542: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.56s)

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-183559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-183559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (200.303177ms)
-- stdout --
	* [functional-183559] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1119 21:58:53.654117   39291 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:58:53.654328   39291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:58:53.654357   39291 out.go:374] Setting ErrFile to fd 2...
	I1119 21:58:53.654377   39291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:58:53.654672   39291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 21:58:53.655099   39291 out.go:368] Setting JSON to false
	I1119 21:58:53.656132   39291 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2454,"bootTime":1763587079,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 21:58:53.656228   39291 start.go:143] virtualization:  
	I1119 21:58:53.659446   39291 out.go:179] * [functional-183559] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 21:58:53.663153   39291 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:58:53.663352   39291 notify.go:221] Checking for updates...
	I1119 21:58:53.669066   39291 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:58:53.671821   39291 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 21:58:53.674665   39291 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 21:58:53.677674   39291 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 21:58:53.680511   39291 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:58:53.688707   39291 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 21:58:53.689256   39291 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:58:53.715129   39291 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 21:58:53.715238   39291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:58:53.777893   39291 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 21:58:53.76767533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:58:53.778001   39291 docker.go:319] overlay module found
	I1119 21:58:53.781277   39291 out.go:179] * Using the docker driver based on existing profile
	I1119 21:58:53.784166   39291 start.go:309] selected driver: docker
	I1119 21:58:53.784187   39291 start.go:930] validating driver "docker" against &{Name:functional-183559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-183559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:58:53.784315   39291 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:58:53.787847   39291 out.go:203] 
	W1119 21:58:53.790594   39291 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1119 21:58:53.793344   39291 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-183559 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.45s)

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-183559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-183559 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (251.486429ms)
-- stdout --
	* [functional-183559] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1119 21:58:53.399779   39178 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:58:53.399990   39178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:58:53.400016   39178 out.go:374] Setting ErrFile to fd 2...
	I1119 21:58:53.400034   39178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:58:53.400452   39178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 21:58:53.400885   39178 out.go:368] Setting JSON to false
	I1119 21:58:53.401870   39178 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2454,"bootTime":1763587079,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 21:58:53.401965   39178 start.go:143] virtualization:  
	I1119 21:58:53.406016   39178 out.go:179] * [functional-183559] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1119 21:58:53.409268   39178 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:58:53.409337   39178 notify.go:221] Checking for updates...
	I1119 21:58:53.419778   39178 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:58:53.422838   39178 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 21:58:53.427823   39178 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 21:58:53.430919   39178 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 21:58:53.433816   39178 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:58:53.437230   39178 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 21:58:53.437809   39178 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:58:53.470858   39178 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 21:58:53.470973   39178 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:58:53.576164   39178 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 21:58:53.566077074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:58:53.576295   39178 docker.go:319] overlay module found
	I1119 21:58:53.579851   39178 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1119 21:58:53.582770   39178 start.go:309] selected driver: docker
	I1119 21:58:53.582793   39178 start.go:930] validating driver "docker" against &{Name:functional-183559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-183559 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:58:53.582913   39178 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:58:53.586554   39178 out.go:203] 
	W1119 21:58:53.589530   39178 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1119 21:58:53.592560   39178 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
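Note: the French stdout above is the assertion target of this test ("Utilisation du pilote docker basé sur le profil existant" is the localized form of "Using the docker driver based on existing profile"). A sketch of driving the same run with a French locale from Go, assuming minikube picks the language up from the standard LC_ALL/LANG variables:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    // Same dry-run as above, but with a French locale in the child
    // environment. The non-zero exit is expected; we only care that
    // the output came back localized.
    cmd := exec.Command("out/minikube-linux-arm64",
        "start", "-p", "functional-183559", "--dry-run", "--memory", "250MB",
        "--driver=docker", "--container-runtime=containerd")
    cmd.Env = append(os.Environ(), "LC_ALL=fr")
    out, _ := cmd.CombinedOutput()
    if strings.Contains(string(out), "Utilisation du pilote docker") {
        fmt.Println("localized output detected")
    }
}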

TestFunctional/parallel/StatusCmd (1.29s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.29s)
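Note: the -f template above pulls the Host, Kubelet, APIServer and Kubeconfig fields; the same fields can be decoded from status -o json. A sketch, assuming the single-node case where the output is one JSON object (multi-node setups emit a list):

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// Fields mirror the Go template used in the log above:
// host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
type clusterStatus struct {
    Host       string
    Kubelet    string
    APIServer  string
    Kubeconfig string
}

func main() {
    out, err := exec.Command("out/minikube-linux-arm64",
        "-p", "functional-183559", "status", "-o", "json").Output()
    if err != nil {
        panic(err) // "status" exits non-zero when components are down
    }
    var st clusterStatus
    if err := json.Unmarshal(out, &st); err != nil {
        panic(err)
    }
    fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
        st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}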

TestFunctional/parallel/ServiceCmdConnect (7.77s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-183559 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-183559 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-bthnj" [b60f88dc-8420-4c66-9d6f-8d1c516af7ce] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-bthnj" [b60f88dc-8420-4c66-9d6f-8d1c516af7ce] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.012057741s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31166
functional_test.go:1680: http://192.168.49.2:31166: success! body:
Request served by hello-node-connect-7d85dfc575-bthnj
HTTP/1.1 GET /
Host: 192.168.49.2:31166
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.77s)
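Note: the test exposes the deployment as a NodePort service and fetches the URL reported by minikube service --url. A sketch of the same reachability check in Go, using the endpoint from the log above:

package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"
)

func main() {
    // Endpoint reported by "minikube service hello-node-connect --url".
    resp, err := http.Get("http://192.168.49.2:31166")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    // kicbase/echo-server reflects the request, including the pod name.
    if strings.Contains(string(body), "Request served by") {
        fmt.Println("service reachable through its NodePort")
    }
}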

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (22.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [88825dd1-724a-4a69-958f-82506277d46b] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007917982s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-183559 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-183559 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-183559 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-183559 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b77a7c8c-b0e9-4e8a-ad56-8a42ad4da3e8] Pending
helpers_test.go:352: "sp-pod" [b77a7c8c-b0e9-4e8a-ad56-8a42ad4da3e8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b77a7c8c-b0e9-4e8a-ad56-8a42ad4da3e8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003417498s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-183559 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-183559 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-183559 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [992b63e4-3fb7-4594-8c82-42a2b9b41255] Pending
helpers_test.go:352: "sp-pod" [992b63e4-3fb7-4594-8c82-42a2b9b41255] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [992b63e4-3fb7-4594-8c82-42a2b9b41255] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004344839s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-183559 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.97s)
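Note: the sequence above is a persistence check: write through the claim, delete the pod, recreate it, and confirm the file is still there. A condensed sketch of the same flow via kubectl (readiness waits between steps are elided; the run helper is ours, not the harness's):

package main

import (
    "fmt"
    "os/exec"
)

// run shells out to kubectl against the test cluster's context.
func run(args ...string) {
    args = append([]string{"--context", "functional-183559"}, args...)
    out, err := exec.Command("kubectl", args...).CombinedOutput()
    fmt.Printf("kubectl %v: %s (err=%v)\n", args, out, err)
}

func main() {
    run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
    run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
    run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the claim
    run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
    run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
    run("exec", "sp-pod", "--", "ls", "/tmp/mount") // file must survive the new pod
}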

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.51s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh -n functional-183559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 cp functional-183559:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3635821644/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh -n functional-183559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh -n functional-183559 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.51s)
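Note: the cp checks above copy a file into the node and read it back over SSH. A sketch of an equivalent round-trip comparison, with error handling trimmed and /tmp/cp-test.out as an illustrative scratch path:

package main

import (
    "bytes"
    "fmt"
    "os"
    "os/exec"
)

func main() {
    mk := "out/minikube-linux-arm64"
    // Push the file into the node, then pull it back out.
    // (Errors from Run are ignored for brevity.)
    exec.Command(mk, "-p", "functional-183559", "cp",
        "testdata/cp-test.txt", "/home/docker/cp-test.txt").Run()
    exec.Command(mk, "-p", "functional-183559", "cp",
        "functional-183559:/home/docker/cp-test.txt", "/tmp/cp-test.out").Run()
    want, _ := os.ReadFile("testdata/cp-test.txt")
    got, _ := os.ReadFile("/tmp/cp-test.out")
    fmt.Println("round-trip intact:", bytes.Equal(want, got))
}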

TestFunctional/parallel/FileSync (0.42s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4144/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "sudo cat /etc/test/nested/copy/4144/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

TestFunctional/parallel/CertSync (2.23s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4144.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "sudo cat /etc/ssl/certs/4144.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4144.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "sudo cat /usr/share/ca-certificates/4144.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41442.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "sudo cat /etc/ssl/certs/41442.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41442.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "sudo cat /usr/share/ca-certificates/41442.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.23s)
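Note: the 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash filenames, which is how CA certs are linked under /etc/ssl/certs. A sketch of deriving that name for a synced cert, assuming openssl on PATH and a local copy of the .pem:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // OpenSSL links CA certs as <subject-hash>.0 under /etc/ssl/certs;
    // this prints the expected link name for a local cert file.
    out, err := exec.Command("openssl", "x509", "-noout", "-hash",
        "-in", "4144.pem").Output()
    if err != nil {
        panic(err)
    }
    fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}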

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-183559 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-183559 ssh "sudo systemctl is-active docker": exit status 1 (294.806644ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-183559 ssh "sudo systemctl is-active crio": exit status 1 (293.283271ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
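Note: systemctl is-active exits 0 only when the unit is active; the status 3 surfaced above is its "inactive" exit code, which is why the harness accepts the non-zero exit together with the "inactive" stdout. A sketch of the same distinction in Go, assuming it runs on a systemd host such as the node itself:

package main

import (
    "errors"
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // "systemctl is-active <unit>" exits 0 when the unit is active and
    // non-zero otherwise; exit code 3 is "inactive", as in the log above.
    for _, unit := range []string{"containerd", "docker", "crio"} {
        out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
        state := strings.TrimSpace(string(out))
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Printf("%s: %s\n", unit, state)
        case errors.As(err, &ee):
            fmt.Printf("%s: %s (exit %d)\n", unit, state, ee.ExitCode())
        default:
            fmt.Printf("%s: %v\n", unit, err)
        }
    }
}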

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-183559 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-183559 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-183559 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-183559 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 36616: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-183559 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-183559 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [96005029-f92f-40ea-a9ad-27cb8fa3f5e7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [96005029-f92f-40ea-a9ad-27cb8fa3f5e7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003570106s
I1119 21:58:33.992371    4144 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.51s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-183559 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.91.184 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-183559 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-183559 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-183559 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-8znsh" [89ffbd17-472a-4215-9584-330cdffe5a7b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-8znsh" [89ffbd17-472a-4215-9584-330cdffe5a7b] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.010385851s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.29s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ServiceCmd/List (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "468.232014ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "62.572224ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "420.717605ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "74.403946ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 service list -o json
functional_test.go:1504: Took "589.059632ms" to run "out/minikube-linux-arm64 -p functional-183559 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)
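Note: service list -o json emits machine-readable rows of the same data as the table form. A sketch of decoding it; the svcRow field names are a guess inferred from the list output, not a documented schema:

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// Assumed shape of one row of "service list -o json"; field names are
// illustrative, not a documented contract.
type svcRow struct {
    Namespace string
    Name      string
    URLs      []string
}

func main() {
    out, err := exec.Command("out/minikube-linux-arm64",
        "-p", "functional-183559", "service", "list", "-o", "json").Output()
    if err != nil {
        panic(err)
    }
    var rows []svcRow
    if err := json.Unmarshal(out, &rows); err != nil {
        panic(err)
    }
    for _, r := range rows {
        fmt.Println(r.Namespace, r.Name, r.URLs)
    }
}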

TestFunctional/parallel/MountCmd/any-port (8.86s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-183559 /tmp/TestFunctionalparallelMountCmdany-port3534988637/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763589530552508877" to /tmp/TestFunctionalparallelMountCmdany-port3534988637/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763589530552508877" to /tmp/TestFunctionalparallelMountCmdany-port3534988637/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763589530552508877" to /tmp/TestFunctionalparallelMountCmdany-port3534988637/001/test-1763589530552508877
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-183559 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (428.255135ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1119 21:58:50.982291    4144 retry.go:31] will retry after 748.388738ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 19 21:58 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 19 21:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 19 21:58 test-1763589530552508877
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh cat /mount-9p/test-1763589530552508877
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-183559 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [974f6bf2-2fde-4b5c-81a3-ecd312d9ef3d] Pending
helpers_test.go:352: "busybox-mount" [974f6bf2-2fde-4b5c-81a3-ecd312d9ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [974f6bf2-2fde-4b5c-81a3-ecd312d9ef3d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [974f6bf2-2fde-4b5c-81a3-ecd312d9ef3d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005766955s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-183559 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-183559 /tmp/TestFunctionalparallelMountCmdany-port3534988637/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.86s)
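Note: the first findmnt probe above fails because the 9p server is not up yet, and the harness retries after a jittered delay (the retry.go:31 line). A sketch of that retry-with-backoff shape; this is illustrative, not minikube's actual retry implementation:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

// retry re-runs f with growing, jittered pauses, in the spirit of the
// "retry.go:31] will retry after ..." lines above.
func retry(attempts int, base time.Duration, f func() error) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = f(); err == nil {
            return nil
        }
        d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
        fmt.Printf("will retry after %v: %v\n", d, err)
        time.Sleep(d)
    }
    return err
}

func main() {
    // Stand-in probe: in the real test this is "findmnt -T /mount-9p".
    _ = retry(3, 500*time.Millisecond, func() error {
        return fmt.Errorf("mount not ready yet")
    })
}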

TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32723
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32723
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)

TestFunctional/parallel/MountCmd/specific-port (2.36s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-183559 /tmp/TestFunctionalparallelMountCmdspecific-port4172066073/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-183559 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (491.146101ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1119 21:58:59.902406    4144 retry.go:31] will retry after 608.127019ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-183559 /tmp/TestFunctionalparallelMountCmdspecific-port4172066073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-183559 ssh "sudo umount -f /mount-9p": exit status 1 (393.475022ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-183559 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-183559 /tmp/TestFunctionalparallelMountCmdspecific-port4172066073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.36s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.32s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-183559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3367612859/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-183559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3367612859/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-183559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3367612859/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-183559 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-183559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3367612859/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-183559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3367612859/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-183559 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3367612859/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.32s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.27s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-183559 version -o=json --components: (1.270547677s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-183559 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-183559
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-183559
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-183559 image ls --format short --alsologtostderr:
I1119 21:59:10.805594   42385 out.go:360] Setting OutFile to fd 1 ...
I1119 21:59:10.805774   42385 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:59:10.805794   42385 out.go:374] Setting ErrFile to fd 2...
I1119 21:59:10.805813   42385 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:59:10.806749   42385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
I1119 21:59:10.807487   42385 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:59:10.807662   42385 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:59:10.808241   42385 cli_runner.go:164] Run: docker container inspect functional-183559 --format={{.State.Status}}
I1119 21:59:10.832957   42385 ssh_runner.go:195] Run: systemctl --version
I1119 21:59:10.833005   42385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-183559
I1119 21:59:10.861686   42385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/functional-183559/id_rsa Username:docker}
I1119 21:59:10.978676   42385 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-183559 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-183559  │ sha256:dfd88f │ 990B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kicbase/echo-server               │ functional-183559  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-183559 image ls --format table --alsologtostderr:
I1119 21:59:11.102669   42464 out.go:360] Setting OutFile to fd 1 ...
I1119 21:59:11.102900   42464 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:59:11.102927   42464 out.go:374] Setting ErrFile to fd 2...
I1119 21:59:11.102946   42464 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:59:11.103250   42464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
I1119 21:59:11.103958   42464 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:59:11.104138   42464 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:59:11.104641   42464 cli_runner.go:164] Run: docker container inspect functional-183559 --format={{.State.Status}}
I1119 21:59:11.130047   42464 ssh_runner.go:195] Run: systemctl --version
I1119 21:59:11.130096   42464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-183559
I1119 21:59:11.150281   42464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/functional-183559/id_rsa Username:docker}
I1119 21:59:11.255602   42464 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-183559 image ls --format json --alsologtostderr:
[
  {"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},
  {"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},
  {"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},
  {"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},
  {"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},
  {"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},
  {"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},
  {"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-183559"],"size":"2173567"},
  {"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},
  {"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},
  {"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},
  {"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},
  {"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},
  {"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},
  {"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},
  {"id":"sha256:dfd88fdd9aac9a3506e0d4dcf1b8d301cc6cebc8d269a66b1f753ee7531e8c14","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-183559"],"size":"990"},
  {"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},
  {"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},
  {"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"}
]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-183559 image ls --format json --alsologtostderr:
I1119 21:59:11.058715   42457 out.go:360] Setting OutFile to fd 1 ...
I1119 21:59:11.058985   42457 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:59:11.059000   42457 out.go:374] Setting ErrFile to fd 2...
I1119 21:59:11.059006   42457 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:59:11.059304   42457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
I1119 21:59:11.060055   42457 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:59:11.060165   42457 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:59:11.060642   42457 cli_runner.go:164] Run: docker container inspect functional-183559 --format={{.State.Status}}
I1119 21:59:11.085816   42457 ssh_runner.go:195] Run: systemctl --version
I1119 21:59:11.085876   42457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-183559
I1119 21:59:11.109914   42457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/functional-183559/id_rsa Username:docker}
I1119 21:59:11.222241   42457 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
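The stdout above is a single JSON array of image records with four keys per entry. A minimal sketch of consuming it from Go (the struct is inferred from the visible keys; `raw` stands in for the captured stdout, truncated here to one entry):

package main

import (
	"encoding/json"
	"fmt"
)

// image matches the keys visible in the `image ls --format json` output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	// One entry copied from the stdout above.
	raw := `[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]`
	var images []image
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}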

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-183559 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-183559
size: "2173567"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:dfd88fdd9aac9a3506e0d4dcf1b8d301cc6cebc8d269a66b1f753ee7531e8c14
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-183559
size: "990"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-183559 image ls --format yaml --alsologtostderr:
I1119 21:59:10.779326   42386 out.go:360] Setting OutFile to fd 1 ...
I1119 21:59:10.779533   42386 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:59:10.779548   42386 out.go:374] Setting ErrFile to fd 2...
I1119 21:59:10.779554   42386 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:59:10.779828   42386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
I1119 21:59:10.780474   42386 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:59:10.780583   42386 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:59:10.781072   42386 cli_runner.go:164] Run: docker container inspect functional-183559 --format={{.State.Status}}
I1119 21:59:10.811880   42386 ssh_runner.go:195] Run: systemctl --version
I1119 21:59:10.811940   42386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-183559
I1119 21:59:10.842859   42386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/functional-183559/id_rsa Username:docker}
I1119 21:59:10.949734   42386 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-183559 ssh pgrep buildkitd: exit status 1 (271.93863ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image build -t localhost/my-image:functional-183559 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-183559 image build -t localhost/my-image:functional-183559 testdata/build --alsologtostderr: (3.534171106s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-183559 image build -t localhost/my-image:functional-183559 testdata/build --alsologtostderr:
I1119 21:59:11.579166   42596 out.go:360] Setting OutFile to fd 1 ...
I1119 21:59:11.579381   42596 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:59:11.579395   42596 out.go:374] Setting ErrFile to fd 2...
I1119 21:59:11.579401   42596 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:59:11.579767   42596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
I1119 21:59:11.580417   42596 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:59:11.583061   42596 config.go:182] Loaded profile config "functional-183559": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 21:59:11.583595   42596 cli_runner.go:164] Run: docker container inspect functional-183559 --format={{.State.Status}}
I1119 21:59:11.601999   42596 ssh_runner.go:195] Run: systemctl --version
I1119 21:59:11.602050   42596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-183559
I1119 21:59:11.623666   42596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/functional-183559/id_rsa Username:docker}
I1119 21:59:11.728772   42596 build_images.go:162] Building image from path: /tmp/build.4175449389.tar
I1119 21:59:11.728849   42596 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1119 21:59:11.736596   42596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4175449389.tar
I1119 21:59:11.740236   42596 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4175449389.tar: stat -c "%s %y" /var/lib/minikube/build/build.4175449389.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4175449389.tar': No such file or directory
I1119 21:59:11.740266   42596 ssh_runner.go:362] scp /tmp/build.4175449389.tar --> /var/lib/minikube/build/build.4175449389.tar (3072 bytes)
I1119 21:59:11.758399   42596 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4175449389
I1119 21:59:11.766562   42596 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4175449389 -xf /var/lib/minikube/build/build.4175449389.tar
I1119 21:59:11.774990   42596 containerd.go:394] Building image: /var/lib/minikube/build/build.4175449389
I1119 21:59:11.775074   42596 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4175449389 --local dockerfile=/var/lib/minikube/build/build.4175449389 --output type=image,name=localhost/my-image:functional-183559
#1 [internal] load build definition from Dockerfile
#1 DONE 0.0s

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:bab5f410c98fc0b3da23b80b0cda71b51c85f8bc47d0e3f15316947f9c756244 0.0s done
#8 exporting config sha256:d39ce779eb7ac8ecdae92c2bc52dbc1b6cfbda9bdce1325e4c3917bc6be181a8 0.0s done
#8 naming to localhost/my-image:functional-183559 done
#8 DONE 0.2s
I1119 21:59:15.018280   42596 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4175449389 --local dockerfile=/var/lib/minikube/build/build.4175449389 --output type=image,name=localhost/my-image:functional-183559: (3.243167782s)
I1119 21:59:15.018355   42596 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4175449389
I1119 21:59:15.031143   42596 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4175449389.tar
I1119 21:59:15.051362   42596 build_images.go:218] Built localhost/my-image:functional-183559 from /tmp/build.4175449389.tar
I1119 21:59:15.051393   42596 build_images.go:134] succeeded building to: functional-183559
I1119 21:59:15.051459   42596 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)
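The stderr above lays out the build flow: pack the local context into a tar (/tmp/build.4175449389.tar), copy it to /var/lib/minikube/build on the node, untar it, and run buildctl against the unpacked directory. As a rough sketch of just the first step, a self-contained Go function that tars a context directory; this is illustrative only, not minikube's actual packaging code, and the paths in main are hypothetical:

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarDir packages a build-context directory into a tar file, the same shape
// of artifact the log shows being shipped to the node as /tmp/build.*.tar.
func tarDir(dir, out string) error {
	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr.Name = rel // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		_, err = io.Copy(tw, src)
		src.Close()
		return err
	})
}

func main() {
	// Hypothetical paths; the test uses testdata/build and a /tmp/build.*.tar name.
	if err := tarDir("testdata/build", "/tmp/build.example.tar"); err != nil {
		panic(err)
	}
}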

TestFunctional/parallel/ImageCommands/Setup (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
2025/11/19 21:59:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-183559
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image load --daemon kicbase/echo-server:functional-183559 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-183559 image load --daemon kicbase/echo-server:functional-183559 --alsologtostderr: (1.121259377s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.40s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image load --daemon kicbase/echo-server:functional-183559 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.25s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-183559
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image load --daemon kicbase/echo-server:functional-183559 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image save kicbase/echo-server:functional-183559 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image rm kicbase/echo-server:functional-183559 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-183559
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-183559 image save --daemon kicbase/echo-server:functional-183559 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-183559
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-183559
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-183559
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-183559
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (211.79s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1119 21:59:31.871706    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:01:48.008073    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:02:15.713780    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m30.9194778s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (211.79s)

TestMultiControlPlane/serial/DeployApp (7.6s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 kubectl -- rollout status deployment/busybox: (4.565791251s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-4gfxm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-npxd2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-txqg2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-4gfxm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-npxd2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-txqg2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-4gfxm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-npxd2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-txqg2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.60s)

TestMultiControlPlane/serial/PingHostFromPods (1.69s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-4gfxm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-4gfxm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-npxd2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-npxd2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-txqg2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 kubectl -- exec busybox-7b57f96db7-txqg2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)
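The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline run in each pod above simply extracts the third space-separated field of the fifth output line, which is where busybox's nslookup prints the resolved address. The same extraction in Go (the sample output is a fabricated stand-in for the pod's actual nslookup format):

package main

import (
	"fmt"
	"strings"
)

// field3OfLine5 mirrors `awk 'NR==5' | cut -d' ' -f3` from the tests above.
func field3OfLine5(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	// cut -d' ' splits on single spaces and keeps empty fields,
	// so use Split rather than strings.Fields.
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Fabricated busybox-style nslookup output, for illustration only.
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1\n"
	fmt.Println(field3OfLine5(sample)) // 192.168.49.1
}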

TestMultiControlPlane/serial/AddWorkerNode (61.3s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 node add --alsologtostderr -v 5
E1119 22:03:25.486349    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:25.492802    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:25.504283    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:25.525764    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:25.567806    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:25.649229    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:25.810754    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:26.132401    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:26.774292    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:28.056583    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:30.618316    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:35.739801    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:03:45.981744    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 node add --alsologtostderr -v 5: (59.796660098s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5: (1.500456948s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.30s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-595717 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.159995577s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.16s)

TestMultiControlPlane/serial/CopyFile (19.99s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 status --output json --alsologtostderr -v 5: (1.086907176s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp testdata/cp-test.txt ha-595717:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile555613663/001/cp-test_ha-595717.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717:/home/docker/cp-test.txt ha-595717-m02:/home/docker/cp-test_ha-595717_ha-595717-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m02 "sudo cat /home/docker/cp-test_ha-595717_ha-595717-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717:/home/docker/cp-test.txt ha-595717-m03:/home/docker/cp-test_ha-595717_ha-595717-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m03 "sudo cat /home/docker/cp-test_ha-595717_ha-595717-m03.txt"
E1119 22:04:06.463722    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717:/home/docker/cp-test.txt ha-595717-m04:/home/docker/cp-test_ha-595717_ha-595717-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m04 "sudo cat /home/docker/cp-test_ha-595717_ha-595717-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp testdata/cp-test.txt ha-595717-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile555613663/001/cp-test_ha-595717-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m02:/home/docker/cp-test.txt ha-595717:/home/docker/cp-test_ha-595717-m02_ha-595717.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717 "sudo cat /home/docker/cp-test_ha-595717-m02_ha-595717.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m02:/home/docker/cp-test.txt ha-595717-m03:/home/docker/cp-test_ha-595717-m02_ha-595717-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m03 "sudo cat /home/docker/cp-test_ha-595717-m02_ha-595717-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m02:/home/docker/cp-test.txt ha-595717-m04:/home/docker/cp-test_ha-595717-m02_ha-595717-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m04 "sudo cat /home/docker/cp-test_ha-595717-m02_ha-595717-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp testdata/cp-test.txt ha-595717-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile555613663/001/cp-test_ha-595717-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m03:/home/docker/cp-test.txt ha-595717:/home/docker/cp-test_ha-595717-m03_ha-595717.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717 "sudo cat /home/docker/cp-test_ha-595717-m03_ha-595717.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m03:/home/docker/cp-test.txt ha-595717-m02:/home/docker/cp-test_ha-595717-m03_ha-595717-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m02 "sudo cat /home/docker/cp-test_ha-595717-m03_ha-595717-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m03:/home/docker/cp-test.txt ha-595717-m04:/home/docker/cp-test_ha-595717-m03_ha-595717-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m04 "sudo cat /home/docker/cp-test_ha-595717-m03_ha-595717-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp testdata/cp-test.txt ha-595717-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile555613663/001/cp-test_ha-595717-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m04:/home/docker/cp-test.txt ha-595717:/home/docker/cp-test_ha-595717-m04_ha-595717.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717 "sudo cat /home/docker/cp-test_ha-595717-m04_ha-595717.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m04:/home/docker/cp-test.txt ha-595717-m02:/home/docker/cp-test_ha-595717-m04_ha-595717-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m02 "sudo cat /home/docker/cp-test_ha-595717-m04_ha-595717-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 cp ha-595717-m04:/home/docker/cp-test.txt ha-595717-m03:/home/docker/cp-test_ha-595717-m04_ha-595717-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 ssh -n ha-595717-m03 "sudo cat /home/docker/cp-test_ha-595717-m04_ha-595717-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.99s)
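Every `cp` in this section is verified by catting the file back over ssh and comparing. A compact Go sketch of one such round-trip, built from the commands logged above (binary path, profile, and node paths are copied from the log; error handling is minimal and the sketch assumes it runs from the test's working directory):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local := "testdata/cp-test.txt"
	want, err := os.ReadFile(local)
	if err != nil {
		panic(err)
	}
	bin := "out/minikube-linux-arm64"
	// Push the file to the node, exactly as in the log.
	if err := exec.Command(bin, "-p", "ha-595717", "cp", local, "ha-595717:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Read it back over ssh and compare.
	got, err := exec.Command(bin, "-p", "ha-595717", "ssh", "-n", "ha-595717", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("round-trip ok:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}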

TestMultiControlPlane/serial/StopSecondaryNode (12.95s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 node stop m02 --alsologtostderr -v 5: (12.154946657s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5: exit status 7 (793.973968ms)

-- stdout --
	ha-595717
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-595717-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-595717-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-595717-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1119 22:04:34.088826   59047 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:04:34.088946   59047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:04:34.088956   59047 out.go:374] Setting ErrFile to fd 2...
	I1119 22:04:34.088962   59047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:04:34.089225   59047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:04:34.089409   59047 out.go:368] Setting JSON to false
	I1119 22:04:34.089433   59047 mustload.go:66] Loading cluster: ha-595717
	I1119 22:04:34.089816   59047 config.go:182] Loaded profile config "ha-595717": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:04:34.089832   59047 status.go:174] checking status of ha-595717 ...
	I1119 22:04:34.090372   59047 cli_runner.go:164] Run: docker container inspect ha-595717 --format={{.State.Status}}
	I1119 22:04:34.090639   59047 notify.go:221] Checking for updates...
	I1119 22:04:34.110421   59047 status.go:371] ha-595717 host status = "Running" (err=<nil>)
	I1119 22:04:34.110448   59047 host.go:66] Checking if "ha-595717" exists ...
	I1119 22:04:34.110739   59047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-595717
	I1119 22:04:34.132208   59047 host.go:66] Checking if "ha-595717" exists ...
	I1119 22:04:34.132516   59047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:04:34.132567   59047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-595717
	I1119 22:04:34.167102   59047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/ha-595717/id_rsa Username:docker}
	I1119 22:04:34.279726   59047 ssh_runner.go:195] Run: systemctl --version
	I1119 22:04:34.286294   59047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:04:34.299441   59047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:04:34.362860   59047 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-19 22:04:34.353262283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:04:34.363377   59047 kubeconfig.go:125] found "ha-595717" server: "https://192.168.49.254:8443"
	I1119 22:04:34.363414   59047 api_server.go:166] Checking apiserver status ...
	I1119 22:04:34.363467   59047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:04:34.377826   59047 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup
	I1119 22:04:34.386749   59047 api_server.go:182] apiserver freezer: "3:freezer:/docker/795c00b2dc9c2d80698c4e1262b8bef6342f4cb60c8db3d5e4b7b2a82982d1d0/kubepods/burstable/podc80a6774c13165b8dec9e3f7e963aacc/9306c4d86d9d6fb25fce39e7840f7bb34bba85e0641ab77963b5dde340ae4570"
	I1119 22:04:34.386844   59047 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/795c00b2dc9c2d80698c4e1262b8bef6342f4cb60c8db3d5e4b7b2a82982d1d0/kubepods/burstable/podc80a6774c13165b8dec9e3f7e963aacc/9306c4d86d9d6fb25fce39e7840f7bb34bba85e0641ab77963b5dde340ae4570/freezer.state
	I1119 22:04:34.394679   59047 api_server.go:204] freezer state: "THAWED"
	I1119 22:04:34.394707   59047 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 22:04:34.404434   59047 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 22:04:34.404470   59047 status.go:463] ha-595717 apiserver status = Running (err=<nil>)
	I1119 22:04:34.404481   59047 status.go:176] ha-595717 status: &{Name:ha-595717 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:04:34.404498   59047 status.go:174] checking status of ha-595717-m02 ...
	I1119 22:04:34.404809   59047 cli_runner.go:164] Run: docker container inspect ha-595717-m02 --format={{.State.Status}}
	I1119 22:04:34.421895   59047 status.go:371] ha-595717-m02 host status = "Stopped" (err=<nil>)
	I1119 22:04:34.421917   59047 status.go:384] host is not running, skipping remaining checks
	I1119 22:04:34.421924   59047 status.go:176] ha-595717-m02 status: &{Name:ha-595717-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:04:34.421943   59047 status.go:174] checking status of ha-595717-m03 ...
	I1119 22:04:34.422299   59047 cli_runner.go:164] Run: docker container inspect ha-595717-m03 --format={{.State.Status}}
	I1119 22:04:34.440009   59047 status.go:371] ha-595717-m03 host status = "Running" (err=<nil>)
	I1119 22:04:34.440039   59047 host.go:66] Checking if "ha-595717-m03" exists ...
	I1119 22:04:34.440340   59047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-595717-m03
	I1119 22:04:34.458069   59047 host.go:66] Checking if "ha-595717-m03" exists ...
	I1119 22:04:34.458462   59047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:04:34.458509   59047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-595717-m03
	I1119 22:04:34.475988   59047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/ha-595717-m03/id_rsa Username:docker}
	I1119 22:04:34.579836   59047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:04:34.596494   59047 kubeconfig.go:125] found "ha-595717" server: "https://192.168.49.254:8443"
	I1119 22:04:34.596524   59047 api_server.go:166] Checking apiserver status ...
	I1119 22:04:34.596572   59047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:04:34.611724   59047 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1430/cgroup
	I1119 22:04:34.623467   59047 api_server.go:182] apiserver freezer: "3:freezer:/docker/8ce82fe7cad573adefd20f861f51481685cd7cb3055f33474e51e6e09ce10241/kubepods/burstable/pod5aea99a3d18f50434301410ea76dfcc2/c3777625f09a2cc403e1e76440b85fc5e002a804d40f66706f7db7f2a4596935"
	I1119 22:04:34.623605   59047 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8ce82fe7cad573adefd20f861f51481685cd7cb3055f33474e51e6e09ce10241/kubepods/burstable/pod5aea99a3d18f50434301410ea76dfcc2/c3777625f09a2cc403e1e76440b85fc5e002a804d40f66706f7db7f2a4596935/freezer.state
	I1119 22:04:34.633193   59047 api_server.go:204] freezer state: "THAWED"
	I1119 22:04:34.633222   59047 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 22:04:34.641923   59047 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 22:04:34.642014   59047 status.go:463] ha-595717-m03 apiserver status = Running (err=<nil>)
	I1119 22:04:34.642038   59047 status.go:176] ha-595717-m03 status: &{Name:ha-595717-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:04:34.642091   59047 status.go:174] checking status of ha-595717-m04 ...
	I1119 22:04:34.642545   59047 cli_runner.go:164] Run: docker container inspect ha-595717-m04 --format={{.State.Status}}
	I1119 22:04:34.660346   59047 status.go:371] ha-595717-m04 host status = "Running" (err=<nil>)
	I1119 22:04:34.660431   59047 host.go:66] Checking if "ha-595717-m04" exists ...
	I1119 22:04:34.660738   59047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-595717-m04
	I1119 22:04:34.682410   59047 host.go:66] Checking if "ha-595717-m04" exists ...
	I1119 22:04:34.682714   59047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:04:34.682758   59047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-595717-m04
	I1119 22:04:34.701109   59047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/ha-595717-m04/id_rsa Username:docker}
	I1119 22:04:34.804647   59047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:04:34.820711   59047 status.go:176] ha-595717-m04 status: &{Name:ha-595717-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.95s)
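The status check above shows how minikube decides an apiserver is Running: find the newest kube-apiserver process, confirm its freezer cgroup is THAWED, then hit /healthz. A rough manual re-trace of those probes, assuming the docker driver and the cgroup v1 freezer layout seen in this log; <pid> and <freezer-path> are placeholders to be filled in from the preceding commands' output:

    # inside a control-plane node, e.g. via: minikube -p ha-595717 ssh
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'                  # newest matching apiserver PID
    sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup              # locate its freezer cgroup
    sudo cat /sys/fs/cgroup/freezer/<freezer-path>/freezer.state  # expect "THAWED"
    curl -k https://192.168.49.254:8443/healthz                   # expect "ok"; anonymous /healthz is allowed under default RBAC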

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

TestMultiControlPlane/serial/RestartSecondaryNode (16.55s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 node start m02 --alsologtostderr -v 5
E1119 22:04:47.425232    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 node start m02 --alsologtostderr -v 5: (15.066733241s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5: (1.35184568s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (16.55s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.176197779s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (101.41s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 stop --alsologtostderr -v 5: (37.62870258s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 start --wait true --alsologtostderr -v 5
E1119 22:06:09.347870    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 start --wait true --alsologtostderr -v 5: (1m3.597672971s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (101.41s)
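The point of this test is that a full stop/start cycle preserves cluster topology. A minimal sketch of the same check, without the verbosity flags:

    minikube -p ha-595717 node list
    minikube -p ha-595717 stop
    minikube -p ha-595717 start --wait true
    minikube -p ha-595717 node list    # should print the same node list as before the stop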

TestMultiControlPlane/serial/DeleteSecondaryNode (11.17s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 node delete m03 --alsologtostderr -v 5: (9.96912127s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.17s)
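The go-template in the final step prints each node's Ready condition. An equivalent jsonpath form, shown only as a sketch:

    kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
    # one "True" per remaining node confirms the cluster stayed healthy after the delete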

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (36.59s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 stop --alsologtostderr -v 5
E1119 22:06:48.008718    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 stop --alsologtostderr -v 5: (36.48280385s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5: exit status 7 (110.510416ms)
-- stdout --
	ha-595717
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-595717-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-595717-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1119 22:07:23.261672   73881 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:07:23.261794   73881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:07:23.261799   73881 out.go:374] Setting ErrFile to fd 2...
	I1119 22:07:23.261803   73881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:07:23.262047   73881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:07:23.262260   73881 out.go:368] Setting JSON to false
	I1119 22:07:23.262282   73881 mustload.go:66] Loading cluster: ha-595717
	I1119 22:07:23.262438   73881 notify.go:221] Checking for updates...
	I1119 22:07:23.262676   73881 config.go:182] Loaded profile config "ha-595717": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:07:23.262695   73881 status.go:174] checking status of ha-595717 ...
	I1119 22:07:23.263201   73881 cli_runner.go:164] Run: docker container inspect ha-595717 --format={{.State.Status}}
	I1119 22:07:23.282981   73881 status.go:371] ha-595717 host status = "Stopped" (err=<nil>)
	I1119 22:07:23.283005   73881 status.go:384] host is not running, skipping remaining checks
	I1119 22:07:23.283012   73881 status.go:176] ha-595717 status: &{Name:ha-595717 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:07:23.283051   73881 status.go:174] checking status of ha-595717-m02 ...
	I1119 22:07:23.283353   73881 cli_runner.go:164] Run: docker container inspect ha-595717-m02 --format={{.State.Status}}
	I1119 22:07:23.311850   73881 status.go:371] ha-595717-m02 host status = "Stopped" (err=<nil>)
	I1119 22:07:23.311874   73881 status.go:384] host is not running, skipping remaining checks
	I1119 22:07:23.311881   73881 status.go:176] ha-595717-m02 status: &{Name:ha-595717-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:07:23.311899   73881 status.go:174] checking status of ha-595717-m04 ...
	I1119 22:07:23.312181   73881 cli_runner.go:164] Run: docker container inspect ha-595717-m04 --format={{.State.Status}}
	I1119 22:07:23.329824   73881 status.go:371] ha-595717-m04 host status = "Stopped" (err=<nil>)
	I1119 22:07:23.329850   73881 status.go:384] host is not running, skipping remaining checks
	I1119 22:07:23.329857   73881 status.go:176] ha-595717-m04 status: &{Name:ha-595717-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.59s)
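Note the Non-zero exit above: `minikube status` intentionally exits non-zero (7 in this run) when any host is stopped, so it can double as a scripted health probe. A sketch against the same profile:

    minikube -p ha-595717 status
    rc=$?
    [ "$rc" -ne 0 ] && echo "cluster not fully running (status exit code $rc)"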

TestMultiControlPlane/serial/RestartCluster (62.17s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m1.156746511s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
E1119 22:08:25.485043    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/RestartCluster (62.17s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

TestMultiControlPlane/serial/AddSecondaryNode (48.2s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 node add --control-plane --alsologtostderr -v 5
E1119 22:08:53.193449    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 node add --control-plane --alsologtostderr -v 5: (47.097485591s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-595717 status --alsologtostderr -v 5: (1.097607938s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (48.20s)
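Growing the HA control plane is a single subcommand; a sketch of the same flow without the log flags:

    minikube -p ha-595717 node add --control-plane
    minikube -p ha-595717 status    # the added node should report "type: Control Plane"
    kubectl get nodes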

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.138315926s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)

TestJSONOutput/start/Command (83.62s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-727137 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-727137 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m23.612947972s)
--- PASS: TestJSONOutput/start/Command (83.62s)
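With --output=json, minikube emits one CloudEvents-style JSON object per line (the TestErrorJSONOutput transcript below shows the exact shape). A sketch that pulls out just the step messages; the profile name and the use of jq are illustrative:

    minikube start -p json-demo --output=json --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'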

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-727137 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-727137 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.98s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-727137 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-727137 --output=json --user=testUser: (5.983523478s)
--- PASS: TestJSONOutput/stop/Command (5.98s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-563807 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-563807 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (103.912821ms)
-- stdout --
	{"specversion":"1.0","id":"9d541f99-c063-4c6d-adf4-28cf1e66a304","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-563807] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4b63165-8559-49f6-afef-53cb22ac7efe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21918"}}
	{"specversion":"1.0","id":"27d795e7-5266-4700-87ed-723f8298ad60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1b947665-c61b-417a-87d5-c02d4f732092","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig"}}
	{"specversion":"1.0","id":"04ac5ef2-8142-43d5-865d-39473a512456","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube"}}
	{"specversion":"1.0","id":"90905f97-94ab-4261-9672-82b7e0c217df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1be5497d-b182-46f4-bfa8-2d6a3922db78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fb32d33f-5109-41f5-997e-856d720200e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-563807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-563807
--- PASS: TestErrorJSONOutput (0.26s)
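Failures travel over the same stream as io.k8s.sigs.minikube.error events, and the process exit code (56 here, DRV_UNSUPPORTED_OS) is machine-checkable. A sketch; the profile and file names are illustrative:

    minikube start -p err-demo --driver=fail --output=json > events.json
    rc=$?    # 56 in the run above
    jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"' events.json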

TestKicCustomNetwork/create_custom_network (50.12s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-368109 --network=
E1119 22:11:48.008435    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-368109 --network=: (47.828151311s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-368109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-368109
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-368109: (2.264057708s)
--- PASS: TestKicCustomNetwork/create_custom_network (50.12s)

TestKicCustomNetwork/use_default_bridge_network (37.96s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-709500 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-709500 --network=bridge: (35.745819247s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-709500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-709500
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-709500: (2.189059468s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.96s)

TestKicExistingNetwork (37.52s)

=== RUN   TestKicExistingNetwork
I1119 22:12:28.911268    4144 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1119 22:12:28.927537    4144 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1119 22:12:28.927605    4144 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1119 22:12:28.927623    4144 cli_runner.go:164] Run: docker network inspect existing-network
W1119 22:12:28.944998    4144 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1119 22:12:28.945029    4144 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1119 22:12:28.945044    4144 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1119 22:12:28.945142    4144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1119 22:12:28.962821    4144 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b0fa93c84379 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:8f:4f:8f:5a:a3} reservation:<nil>}
I1119 22:12:28.963132    4144 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c65090}
I1119 22:12:28.963153    4144 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1119 22:12:28.963201    4144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1119 22:12:29.029619    4144 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-661945 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-661945 --network=existing-network: (35.308856172s)
helpers_test.go:175: Cleaning up "existing-network-661945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-661945
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-661945: (2.056117387s)
I1119 22:13:06.411182    4144 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.52s)
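The log above spells out how a network is pre-created before minikube is pointed at it with --network. A condensed sketch of the same two steps, with minikube's bookkeeping labels and extra bridge options omitted and an illustrative profile name:

    docker network create --driver=bridge \
      --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o com.docker.network.driver.mtu=1500 existing-network
    minikube start -p net-demo --network=existing-network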

TestKicCustomSubnet (37.33s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-706410 --subnet=192.168.60.0/24
E1119 22:13:11.075141    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:13:25.489832    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-706410 --subnet=192.168.60.0/24: (35.066232516s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-706410 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-706410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-706410
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-706410: (2.232930799s)
--- PASS: TestKicCustomSubnet (37.33s)
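--subnet pins the CIDR of the auto-created docker network (named after the profile), which the test verifies straight from docker. A sketch with an illustrative profile name:

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'    # expect 192.168.60.0/24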

TestKicStaticIP (37.77s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-223146 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-223146 --static-ip=192.168.200.200: (35.353843041s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-223146 ip
helpers_test.go:175: Cleaning up "static-ip-223146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-223146
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-223146: (2.26138018s)
--- PASS: TestKicStaticIP (37.77s)
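--static-ip assigns the node a fixed address, confirmed here via `minikube ip`. A sketch with an illustrative profile name:

    minikube start -p ip-demo --static-ip=192.168.200.200
    minikube -p ip-demo ip    # expected to print 192.168.200.200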

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.78s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-687893 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-687893 --driver=docker  --container-runtime=containerd: (30.486499999s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-690592 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-690592 --driver=docker  --container-runtime=containerd: (37.340300028s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-687893
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-690592
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-690592" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-690592
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-690592: (2.212166293s)
helpers_test.go:175: Cleaning up "first-687893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-687893
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-687893: (2.045946577s)
--- PASS: TestMinikubeProfile (73.78s)
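`minikube profile <name>` switches the active profile and `profile list -ojson` reports the result. A sketch (jq is used here only to pretty-print the output):

    minikube profile first-687893          # make first-687893 the active profile
    minikube profile list -ojson | jq .    # inspect the emitted profile records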

TestMountStart/serial/StartWithMountFirst (9.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-663098 --memory=3072 --mount-string /tmp/TestMountStartserial1671779911/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-663098 --memory=3072 --mount-string /tmp/TestMountStartserial1671779911/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.398236651s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.40s)

TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-663098 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

TestMountStart/serial/StartWithMountSecond (7.16s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-665051 --memory=3072 --mount-string /tmp/TestMountStartserial1671779911/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-665051 --memory=3072 --mount-string /tmp/TestMountStartserial1671779911/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.161553324s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.16s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-665051 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-663098 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-663098 --alsologtostderr -v=5: (1.716315631s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-665051 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-665051
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-665051: (1.292246848s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.75s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-665051
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-665051: (6.751904267s)
--- PASS: TestMountStart/serial/RestartStopped (7.75s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-665051 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (108.48s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-322294 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1119 22:16:48.008465    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-322294 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.946078558s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.48s)

TestMultiNode/serial/DeployApp2Nodes (5.22s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-322294 -- rollout status deployment/busybox: (3.323841099s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- exec busybox-7b57f96db7-747s4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- exec busybox-7b57f96db7-r5nzl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- exec busybox-7b57f96db7-747s4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- exec busybox-7b57f96db7-r5nzl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- exec busybox-7b57f96db7-747s4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- exec busybox-7b57f96db7-r5nzl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.22s)
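Each busybox replica is exec'd directly to prove in-cluster DNS resolution from both nodes. A sketch; <pod-name> must be filled in from the get pods output:

    kubectl --context multinode-322294 get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl --context multinode-322294 exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local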

TestMultiNode/serial/PingHostFrom2Pods (1.32s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- exec busybox-7b57f96db7-747s4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- exec busybox-7b57f96db7-747s4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- exec busybox-7b57f96db7-r5nzl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-322294 -- exec busybox-7b57f96db7-r5nzl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.32s)
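The exec'd pipeline is worth unpacking: with busybox nslookup, line 5 of the output carries the resolved record for host.minikube.internal, and field 3 of that line is the IP, which is then pinged to prove pod-to-host reachability. As a standalone sketch (<pod-name> is a placeholder):

    host_ip=$(kubectl --context multinode-322294 exec <pod-name> -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-322294 exec <pod-name> -- ping -c 1 "$host_ip"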

TestMultiNode/serial/AddNode (28.7s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-322294 -v=5 --alsologtostderr
E1119 22:18:25.485343    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-322294 -v=5 --alsologtostderr: (28.019236392s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.70s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-322294 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.77s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.77s)

TestMultiNode/serial/CopyFile (10.44s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp testdata/cp-test.txt multinode-322294:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp multinode-322294:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3229449320/001/cp-test_multinode-322294.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp multinode-322294:/home/docker/cp-test.txt multinode-322294-m02:/home/docker/cp-test_multinode-322294_multinode-322294-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m02 "sudo cat /home/docker/cp-test_multinode-322294_multinode-322294-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp multinode-322294:/home/docker/cp-test.txt multinode-322294-m03:/home/docker/cp-test_multinode-322294_multinode-322294-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m03 "sudo cat /home/docker/cp-test_multinode-322294_multinode-322294-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp testdata/cp-test.txt multinode-322294-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp multinode-322294-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3229449320/001/cp-test_multinode-322294-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp multinode-322294-m02:/home/docker/cp-test.txt multinode-322294:/home/docker/cp-test_multinode-322294-m02_multinode-322294.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294 "sudo cat /home/docker/cp-test_multinode-322294-m02_multinode-322294.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp multinode-322294-m02:/home/docker/cp-test.txt multinode-322294-m03:/home/docker/cp-test_multinode-322294-m02_multinode-322294-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m03 "sudo cat /home/docker/cp-test_multinode-322294-m02_multinode-322294-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp testdata/cp-test.txt multinode-322294-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp multinode-322294-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3229449320/001/cp-test_multinode-322294-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp multinode-322294-m03:/home/docker/cp-test.txt multinode-322294:/home/docker/cp-test_multinode-322294-m03_multinode-322294.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294 "sudo cat /home/docker/cp-test_multinode-322294-m03_multinode-322294.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 cp multinode-322294-m03:/home/docker/cp-test.txt multinode-322294-m02:/home/docker/cp-test_multinode-322294-m03_multinode-322294-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 ssh -n multinode-322294-m02 "sudo cat /home/docker/cp-test_multinode-322294-m03_multinode-322294-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.44s)
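`minikube cp` accepts a host path or <node>:<path> on either side, so one subcommand covers all three directions exercised above. A condensed sketch; the local destination file name is illustrative:

    minikube -p multinode-322294 cp testdata/cp-test.txt multinode-322294:/home/docker/cp-test.txt      # host -> node
    minikube -p multinode-322294 cp multinode-322294:/home/docker/cp-test.txt ./cp-test.local.txt       # node -> host
    minikube -p multinode-322294 cp multinode-322294:/home/docker/cp-test.txt multinode-322294-m02:/home/docker/cp-test.txt    # node -> node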

TestMultiNode/serial/StopNode (2.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-322294 node stop m03: (1.354045674s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-322294 status: exit status 7 (556.153879ms)

                                                
                                                
-- stdout --
	multinode-322294
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-322294-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-322294-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-322294 status --alsologtostderr: exit status 7 (540.725779ms)

                                                
                                                
-- stdout --
	multinode-322294
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-322294-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-322294-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:18:42.692898  127108 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:18:42.693154  127108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:18:42.693186  127108 out.go:374] Setting ErrFile to fd 2...
	I1119 22:18:42.693220  127108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:18:42.693612  127108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:18:42.694205  127108 out.go:368] Setting JSON to false
	I1119 22:18:42.694282  127108 mustload.go:66] Loading cluster: multinode-322294
	I1119 22:18:42.694348  127108 notify.go:221] Checking for updates...
	I1119 22:18:42.695288  127108 config.go:182] Loaded profile config "multinode-322294": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:18:42.695345  127108 status.go:174] checking status of multinode-322294 ...
	I1119 22:18:42.696242  127108 cli_runner.go:164] Run: docker container inspect multinode-322294 --format={{.State.Status}}
	I1119 22:18:42.721248  127108 status.go:371] multinode-322294 host status = "Running" (err=<nil>)
	I1119 22:18:42.721271  127108 host.go:66] Checking if "multinode-322294" exists ...
	I1119 22:18:42.721592  127108 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-322294
	I1119 22:18:42.748763  127108 host.go:66] Checking if "multinode-322294" exists ...
	I1119 22:18:42.749055  127108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:18:42.749096  127108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-322294
	I1119 22:18:42.769748  127108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/multinode-322294/id_rsa Username:docker}
	I1119 22:18:42.868566  127108 ssh_runner.go:195] Run: systemctl --version
	I1119 22:18:42.875262  127108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:18:42.888738  127108 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:18:42.951255  127108 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 22:18:42.941757365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:18:42.951820  127108 kubeconfig.go:125] found "multinode-322294" server: "https://192.168.67.2:8443"
	I1119 22:18:42.951859  127108 api_server.go:166] Checking apiserver status ...
	I1119 22:18:42.951915  127108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:18:42.964394  127108 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1332/cgroup
	I1119 22:18:42.973033  127108 api_server.go:182] apiserver freezer: "3:freezer:/docker/bbf1d51e4d8634afcee32b300210e5991dd0e6a41318622bbb86cef4035ed82f/kubepods/burstable/pod0cc55d16e98bd40d0bc47aa830cc9287/16ca10689c0234e869d5e125efb0bddcfa6b241ab2236701715981d003155e1a"
	I1119 22:18:42.973119  127108 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bbf1d51e4d8634afcee32b300210e5991dd0e6a41318622bbb86cef4035ed82f/kubepods/burstable/pod0cc55d16e98bd40d0bc47aa830cc9287/16ca10689c0234e869d5e125efb0bddcfa6b241ab2236701715981d003155e1a/freezer.state
	I1119 22:18:42.981315  127108 api_server.go:204] freezer state: "THAWED"
	I1119 22:18:42.981346  127108 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1119 22:18:42.990831  127108 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1119 22:18:42.990859  127108 status.go:463] multinode-322294 apiserver status = Running (err=<nil>)
	I1119 22:18:42.990871  127108 status.go:176] multinode-322294 status: &{Name:multinode-322294 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:18:42.990887  127108 status.go:174] checking status of multinode-322294-m02 ...
	I1119 22:18:42.991192  127108 cli_runner.go:164] Run: docker container inspect multinode-322294-m02 --format={{.State.Status}}
	I1119 22:18:43.009329  127108 status.go:371] multinode-322294-m02 host status = "Running" (err=<nil>)
	I1119 22:18:43.009355  127108 host.go:66] Checking if "multinode-322294-m02" exists ...
	I1119 22:18:43.009679  127108 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-322294-m02
	I1119 22:18:43.027466  127108 host.go:66] Checking if "multinode-322294-m02" exists ...
	I1119 22:18:43.027784  127108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:18:43.027837  127108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-322294-m02
	I1119 22:18:43.046062  127108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21918-2347/.minikube/machines/multinode-322294-m02/id_rsa Username:docker}
	I1119 22:18:43.147530  127108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:18:43.161556  127108 status.go:176] multinode-322294-m02 status: &{Name:multinode-322294-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:18:43.161591  127108 status.go:174] checking status of multinode-322294-m03 ...
	I1119 22:18:43.162001  127108 cli_runner.go:164] Run: docker container inspect multinode-322294-m03 --format={{.State.Status}}
	I1119 22:18:43.179646  127108 status.go:371] multinode-322294-m03 host status = "Stopped" (err=<nil>)
	I1119 22:18:43.179671  127108 status.go:384] host is not running, skipping remaining checks
	I1119 22:18:43.179678  127108 status.go:176] multinode-322294-m03 status: &{Name:multinode-322294-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
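
As the transcript shows, minikube status exits non-zero (here exit status 7) as soon as any node in the profile reports Stopped, so scripts polling a multi-node cluster should tolerate that code rather than treat it as a hard failure. A minimal sketch, reusing this run's profile and node names:

	$ minikube -p multinode-322294 node stop m03
	$ minikube -p multinode-322294 status || echo "status exited $? (7 here = at least one host stopped)"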

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-322294 node start m03 -v=5 --alsologtostderr: (7.134348976s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.96s)
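
Restarting just the stopped worker is the inverse operation; the three checks the test performs translate directly to the CLI (sketch, names from this run):

	$ minikube -p multinode-322294 node start m03 -v=5 --alsologtostderr
	$ minikube -p multinode-322294 status -v=5 --alsologtostderr
	$ kubectl get nodes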

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (72.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-322294
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-322294
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-322294: (25.154564825s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-322294 --wait=true -v=5 --alsologtostderr
E1119 22:19:48.554785    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-322294 --wait=true -v=5 --alsologtostderr: (47.692563421s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-322294
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.97s)
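
The point of this step is that a full stop/start cycle preserves the node list. Sketched with the same flags the test uses:

	$ minikube node list -p multinode-322294
	$ minikube stop -p multinode-322294
	$ minikube start -p multinode-322294 --wait=true -v=5 --alsologtostderr
	$ minikube node list -p multinode-322294   # should print the same three nodes as before the stop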

                                                
                                    
TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-322294 node delete m03: (4.960165432s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)
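
Node deletion plus the readiness check reduce to two commands; the go-template simply prints one Ready-condition status per remaining node. A sketch with quoting adjusted for an interactive shell (the test passes the template as a single argv element):

	$ minikube -p multinode-322294 node delete m03
	$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'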

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-322294 stop: (23.957057951s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-322294 status: exit status 7 (91.227577ms)

                                                
                                                
-- stdout --
	multinode-322294
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-322294-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-322294 status --alsologtostderr: exit status 7 (94.324634ms)

                                                
                                                
-- stdout --
	multinode-322294
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-322294-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:20:33.887487  135890 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:20:33.887667  135890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:20:33.887694  135890 out.go:374] Setting ErrFile to fd 2...
	I1119 22:20:33.887717  135890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:20:33.888008  135890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:20:33.888224  135890 out.go:368] Setting JSON to false
	I1119 22:20:33.888285  135890 mustload.go:66] Loading cluster: multinode-322294
	I1119 22:20:33.888354  135890 notify.go:221] Checking for updates...
	I1119 22:20:33.888741  135890 config.go:182] Loaded profile config "multinode-322294": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:20:33.888782  135890 status.go:174] checking status of multinode-322294 ...
	I1119 22:20:33.889331  135890 cli_runner.go:164] Run: docker container inspect multinode-322294 --format={{.State.Status}}
	I1119 22:20:33.909192  135890 status.go:371] multinode-322294 host status = "Stopped" (err=<nil>)
	I1119 22:20:33.909215  135890 status.go:384] host is not running, skipping remaining checks
	I1119 22:20:33.909221  135890 status.go:176] multinode-322294 status: &{Name:multinode-322294 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:20:33.909253  135890 status.go:174] checking status of multinode-322294-m02 ...
	I1119 22:20:33.909552  135890 cli_runner.go:164] Run: docker container inspect multinode-322294-m02 --format={{.State.Status}}
	I1119 22:20:33.931907  135890 status.go:371] multinode-322294-m02 host status = "Stopped" (err=<nil>)
	I1119 22:20:33.931932  135890 status.go:384] host is not running, skipping remaining checks
	I1119 22:20:33.931946  135890 status.go:176] multinode-322294-m02 status: &{Name:multinode-322294-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.14s)
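
With no node argument, minikube stop takes down every node in the profile, and status then exits 7 with everything reported Stopped, as captured above. Sketch:

	$ minikube -p multinode-322294 stop
	$ minikube -p multinode-322294 status; echo "exit $?"   # expect 7 while all hosts are stopped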

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-322294 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-322294 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (50.829648672s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-322294 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.52s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-322294
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-322294-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-322294-m02 --driver=docker  --container-runtime=containerd: exit status 14 (97.403413ms)

                                                
                                                
-- stdout --
	* [multinode-322294-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-322294-m02' is duplicated with machine name 'multinode-322294-m02' in profile 'multinode-322294'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-322294-m03 --driver=docker  --container-runtime=containerd
E1119 22:21:48.008424    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-322294-m03 --driver=docker  --container-runtime=containerd: (36.703483783s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-322294
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-322294: exit status 80 (351.913707ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-322294 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-322294-m03 already exists in multinode-322294-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-322294-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-322294-m03: (2.102317703s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.31s)
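
The constraints being validated: a new profile may not reuse a machine name that an existing multi-node profile already owns (exit status 14, MK_USAGE), and node add refuses to create a node whose generated name clashes with an existing standalone profile (exit status 80, GUEST_NODE_ADD). A sketch of the same collision, names from this run:

	$ minikube start -p multinode-322294-m02 --driver=docker --container-runtime=containerd
	  # refused: "Profile name should be unique", exit 14
	$ minikube start -p multinode-322294-m03 --driver=docker --container-runtime=containerd   # standalone profile named like the next node
	$ minikube node add -p multinode-322294   # tries to add m03, collides with that profile, exit 80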

                                                
                                    
TestPreload (152.09s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-005285 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-005285 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (57.697938203s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-005285 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-005285 image pull gcr.io/k8s-minikube/busybox: (2.159391523s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-005285
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-005285: (5.900157634s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-005285 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1119 22:23:25.490283    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-005285 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m23.502149624s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-005285 image list
helpers_test.go:175: Cleaning up "test-preload-005285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-005285
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-005285: (2.581595504s)
--- PASS: TestPreload (152.09s)
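
The workflow under test: create a cluster with preloaded images disabled, side-load an image, then verify it survives a stop/start cycle. The same sequence by hand (sketch; versions and profile name from this run):

	$ minikube start -p test-preload-005285 --memory=3072 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.0
	$ minikube -p test-preload-005285 image pull gcr.io/k8s-minikube/busybox
	$ minikube stop -p test-preload-005285
	$ minikube start -p test-preload-005285 --memory=3072 --driver=docker --container-runtime=containerd
	$ minikube -p test-preload-005285 image list   # busybox should still be listed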

                                                
                                    
TestScheduledStopUnix (109.55s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-515329 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-515329 --memory=3072 --driver=docker  --container-runtime=containerd: (33.416437486s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-515329 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:25:14.539145  151793 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:25:14.539346  151793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:25:14.539356  151793 out.go:374] Setting ErrFile to fd 2...
	I1119 22:25:14.539361  151793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:25:14.539632  151793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:25:14.539891  151793 out.go:368] Setting JSON to false
	I1119 22:25:14.540000  151793 mustload.go:66] Loading cluster: scheduled-stop-515329
	I1119 22:25:14.540358  151793 config.go:182] Loaded profile config "scheduled-stop-515329": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:25:14.540443  151793 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/config.json ...
	I1119 22:25:14.540629  151793 mustload.go:66] Loading cluster: scheduled-stop-515329
	I1119 22:25:14.540768  151793 config.go:182] Loaded profile config "scheduled-stop-515329": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-515329 -n scheduled-stop-515329
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-515329 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:25:14.995969  151880 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:25:14.996099  151880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:25:14.996104  151880 out.go:374] Setting ErrFile to fd 2...
	I1119 22:25:14.996109  151880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:25:14.996413  151880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:25:14.996680  151880 out.go:368] Setting JSON to false
	I1119 22:25:14.999041  151880 daemonize_unix.go:73] killing process 151814 as it is an old scheduled stop
	I1119 22:25:15.001396  151880 mustload.go:66] Loading cluster: scheduled-stop-515329
	I1119 22:25:15.005574  151880 config.go:182] Loaded profile config "scheduled-stop-515329": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:25:15.005797  151880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/config.json ...
	I1119 22:25:15.006208  151880 mustload.go:66] Loading cluster: scheduled-stop-515329
	I1119 22:25:15.006473  151880 config.go:182] Loaded profile config "scheduled-stop-515329": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1119 22:25:15.035560    4144 retry.go:31] will retry after 70.101µs: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.036723    4144 retry.go:31] will retry after 216.568µs: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.037500    4144 retry.go:31] will retry after 131.037µs: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.038399    4144 retry.go:31] will retry after 253.271µs: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.039669    4144 retry.go:31] will retry after 507.84µs: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.040837    4144 retry.go:31] will retry after 411.487µs: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.042109    4144 retry.go:31] will retry after 824.266µs: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.043406    4144 retry.go:31] will retry after 1.130524ms: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.045661    4144 retry.go:31] will retry after 1.994759ms: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.047990    4144 retry.go:31] will retry after 2.30225ms: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.050643    4144 retry.go:31] will retry after 8.592281ms: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.059940    4144 retry.go:31] will retry after 5.265013ms: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.067005    4144 retry.go:31] will retry after 17.491265ms: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.084690    4144 retry.go:31] will retry after 15.418853ms: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.100936    4144 retry.go:31] will retry after 19.238552ms: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
I1119 22:25:15.121200    4144 retry.go:31] will retry after 63.645117ms: open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-515329 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-515329 -n scheduled-stop-515329
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-515329
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-515329 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:25:40.971207  152376 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:25:40.971432  152376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:25:40.971464  152376 out.go:374] Setting ErrFile to fd 2...
	I1119 22:25:40.971484  152376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:25:40.971755  152376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:25:40.972045  152376 out.go:368] Setting JSON to false
	I1119 22:25:40.972169  152376 mustload.go:66] Loading cluster: scheduled-stop-515329
	I1119 22:25:40.975449  152376 config.go:182] Loaded profile config "scheduled-stop-515329": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:25:40.975689  152376 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/scheduled-stop-515329/config.json ...
	I1119 22:25:40.976052  152376 mustload.go:66] Loading cluster: scheduled-stop-515329
	I1119 22:25:40.976275  152376 config.go:182] Loaded profile config "scheduled-stop-515329": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-515329
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-515329: exit status 7 (74.89783ms)

                                                
                                                
-- stdout --
	scheduled-stop-515329
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-515329 -n scheduled-stop-515329
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-515329 -n scheduled-stop-515329: exit status 7 (68.666335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-515329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-515329
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-515329: (4.481460608s)
--- PASS: TestScheduledStopUnix (109.55s)
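
The scheduled-stop lifecycle exercised above, condensed. Note that re-issuing --schedule replaces the previous timer (the "killing process ... as it is an old scheduled stop" line), and --cancel-scheduled disarms it entirely:

	$ minikube stop -p scheduled-stop-515329 --schedule 5m
	$ minikube stop -p scheduled-stop-515329 --schedule 15s      # supersedes the 5m timer
	$ minikube stop -p scheduled-stop-515329 --cancel-scheduled  # nothing will stop
	$ minikube status --format={{.TimeToStop}} -p scheduled-stop-515329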

                                                
                                    
TestInsufficientStorage (13.06s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-621483 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-621483 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.478839278s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6f1f1eaf-3296-4230-b23e-555d4cba852d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-621483] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cef2e594-4034-423f-8934-d5c3921d3de9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21918"}}
	{"specversion":"1.0","id":"2ec39621-be75-43b0-a759-1fe33038699d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"03c4aaf9-ed0e-4301-a9b2-ca0798fa2016","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig"}}
	{"specversion":"1.0","id":"b4af0d43-dbf0-4a5d-8890-f74e98486554","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube"}}
	{"specversion":"1.0","id":"c16eba3f-7e97-4f33-a60d-ee834332d654","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e3f8a7bd-360d-4937-9df0-4c3cf1496b07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a6ad1f21-0492-439d-9438-22baff3c0007","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"04fd3ea8-6c6e-456c-a223-03092baaeab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2ae57950-6c8b-49b8-a696-142df1eef2f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7feb494e-ab90-46c8-8128-86d3760f7c67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2ad71da6-7585-4fd0-b3a6-5ffc7cba3754","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-621483\" primary control-plane node in \"insufficient-storage-621483\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1bec9c25-f0f3-463f-b1c5-493fdfbd2ba2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763561786-21918 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"37c50de2-6aa4-4ad0-a1ee-4e95a5a35957","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a8ee456-a5da-47c8-a994-680a17c0301a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-621483 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-621483 --output=json --layout=cluster: exit status 7 (302.345253ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-621483","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-621483","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 22:26:41.372437  153998 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-621483" does not appear in /home/jenkins/minikube-integration/21918-2347/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-621483 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-621483 --output=json --layout=cluster: exit status 7 (307.243403ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-621483","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-621483","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 22:26:41.681170  154063 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-621483" does not appear in /home/jenkins/minikube-integration/21918-2347/kubeconfig
	E1119 22:26:41.691905  154063 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/insufficient-storage-621483/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-621483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-621483
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-621483: (1.969211328s)
--- PASS: TestInsufficientStorage (13.06s)
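
With --output=json, start emits one CloudEvents-style JSON object per line on stdout; the failure above is the io.k8s.sigs.minikube.error event with exitcode 26 (RSRC_DOCKER_STORAGE). A sketch for pulling the human-readable message out of that stream (assumes jq is available; jq is not part of minikube):

	$ minikube start -p insufficient-storage-621483 --output=json --driver=docker --container-runtime=containerd \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'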

                                                
                                    
TestRunningBinaryUpgrade (66.22s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1400467982 start -p running-upgrade-737412 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1400467982 start -p running-upgrade-737412 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (35.084903321s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-737412 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-737412 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.851342921s)
helpers_test.go:175: Cleaning up "running-upgrade-737412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-737412
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-737412: (2.052402706s)
--- PASS: TestRunningBinaryUpgrade (66.22s)
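
The upgrade path being tested: start a cluster with an old release (v1.32.0 here, which still spells the driver flag --vm-driver), then run start on the same profile with the newer binary while the cluster is running. Sketch using this run's binaries:

	$ /tmp/minikube-v1.32.0.1400467982 start -p running-upgrade-737412 --memory=3072 --vm-driver=docker --container-runtime=containerd
	$ out/minikube-linux-arm64 start -p running-upgrade-737412 --memory=3072 --driver=docker --container-runtime=containerd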

                                                
                                    
TestKubernetesUpgrade (359.78s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-176802 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-176802 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.469415018s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-176802
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-176802: (1.47591766s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-176802 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-176802 status --format={{.Host}}: exit status 7 (112.962404ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-176802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-176802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m53.607369073s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-176802 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-176802 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-176802 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (109.674609ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-176802] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-176802
	    minikube start -p kubernetes-upgrade-176802 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1768022 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-176802 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-176802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-176802 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.631409236s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-176802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-176802
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-176802: (3.270003141s)
--- PASS: TestKubernetesUpgrade (359.78s)
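
In short: upgrading --kubernetes-version across a stop is supported, downgrading in place is not (exit status 106, K8S_DOWNGRADE_UNSUPPORTED), and the suggested escape hatches are delete-and-recreate or a second profile. Condensed sketch:

	$ minikube start -p kubernetes-upgrade-176802 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	$ minikube stop -p kubernetes-upgrade-176802
	$ minikube start -p kubernetes-upgrade-176802 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=containerd
	$ minikube start -p kubernetes-upgrade-176802 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd   # refused, exit 106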

                                                
                                    
TestMissingContainerUpgrade (152.98s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3084335888 start -p missing-upgrade-627263 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3084335888 start -p missing-upgrade-627263 --memory=3072 --driver=docker  --container-runtime=containerd: (1m16.509641031s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-627263
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-627263: (1.830692612s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-627263
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-627263 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1119 22:28:25.485422    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-627263 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.149309615s)
helpers_test.go:175: Cleaning up "missing-upgrade-627263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-627263
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-627263: (1.987781624s)
--- PASS: TestMissingContainerUpgrade (152.98s)
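
This scenario deletes the underlying Docker container out from under an old-version profile and checks that the new binary can recreate it on start. By hand (sketch, binaries from this run):

	$ /tmp/minikube-v1.32.0.3084335888 start -p missing-upgrade-627263 --memory=3072 --driver=docker --container-runtime=containerd
	$ docker stop missing-upgrade-627263 && docker rm missing-upgrade-627263
	$ out/minikube-linux-arm64 start -p missing-upgrade-627263 --memory=3072 --driver=docker --container-runtime=containerd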

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-676141 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-676141 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (107.482659ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-676141] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
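
The guard being tested: --no-kubernetes and --kubernetes-version are mutually exclusive (exit status 14), and if the version comes from global config rather than the command line, the unset command from the error text clears it:

	$ minikube start -p NoKubernetes-676141 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	  # exits 14: cannot specify --kubernetes-version with --no-kubernetes
	$ minikube config unset kubernetes-version   # clears a version set via global config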

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-676141 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1119 22:26:48.008848    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-676141 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.403091122s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-676141 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.79s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-676141 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-676141 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (14.624017591s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-676141 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-676141 status -o json: exit status 2 (478.942723ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-676141","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-676141
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-676141: (2.369871485s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.47s)

TestNoKubernetes/serial/Start (10.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-676141 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-676141 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (10.301816077s)
--- PASS: TestNoKubernetes/serial/Start (10.30s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21918-2347/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-676141 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-676141 "sudo systemctl is-active --quiet service kubelet": exit status 1 (355.389259ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
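
The "Process exited with status 3" in stderr is the point of the test: systemctl is-active exits 0 only for an active unit (3 conventionally means inactive), so a non-zero exit is taken as proof that kubelet is not running. A sketch of the same check (profile name illustrative):

	if out/minikube-linux-arm64 ssh -p demo "sudo systemctl is-active --quiet service kubelet"; then
	  echo "kubelet is active (unexpected with --no-kubernetes)"
	else
	  echo "kubelet is not running, as expected"
	fi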

TestNoKubernetes/serial/ProfileList (1.19s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.19s)

TestNoKubernetes/serial/Stop (1.39s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-676141
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-676141: (1.3908284s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

TestNoKubernetes/serial/StartNoArgs (8.71s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-676141 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-676141 --driver=docker  --container-runtime=containerd: (8.705001658s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.71s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-676141 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-676141 "sudo systemctl is-active --quiet service kubelet": exit status 1 (407.191544ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)

TestStoppedBinaryUpgrade/Setup (1.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.43s)

TestStoppedBinaryUpgrade/Upgrade (67.82s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.522423892 start -p stopped-upgrade-346748 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1119 22:29:51.077267    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.522423892 start -p stopped-upgrade-346748 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.008756955s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.522423892 -p stopped-upgrade-346748 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.522423892 -p stopped-upgrade-346748 stop: (1.34355482s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-346748 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-346748 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.469885316s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.82s)
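
The upgrade flow exercised here is: start a cluster with an older release binary, stop it, then start the same profile with the binary under test, which must adopt the existing state. A condensed sketch (binary path and profile name illustrative):

	OLD=/tmp/minikube-v1.32.0            # older release binary
	NEW=out/minikube-linux-arm64         # binary under test
	"$OLD" start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=containerd
	"$OLD" -p upgrade-demo stop
	"$NEW" start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=containerd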

TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-346748
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-346748: (1.419160154s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

TestPause/serial/Start (81.63s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-215582 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1119 22:31:48.008632    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-215582 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m21.630159835s)
--- PASS: TestPause/serial/Start (81.63s)

TestPause/serial/SecondStartNoReconfiguration (7.71s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-215582 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-215582 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.689664438s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.71s)

TestPause/serial/Pause (0.73s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-215582 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-215582 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-215582 --output=json --layout=cluster: exit status 2 (334.403324ms)

-- stdout --
	{"Name":"pause-215582","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-215582","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
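
The cluster-layout status above encodes the paused state as StatusCode 418 / "Paused" at both the cluster and component level. A jq sketch for pulling those fields out (profile name illustrative; the command exits 2 while paused, so the non-zero exit is tolerated):

	s="$(out/minikube-linux-arm64 status -p demo --output=json --layout=cluster || true)"
	echo "$s" | jq -r '.StatusName'                                  # Paused
	echo "$s" | jq -r '.Nodes[].Components.apiserver.StatusName'     # Paused
	echo "$s" | jq -r '.Nodes[].Components.kubelet.StatusName'       # Stopped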

TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-215582 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.82s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-215582 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

TestPause/serial/DeletePaused (3.11s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-215582 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-215582 --alsologtostderr -v=5: (3.110431518s)
--- PASS: TestPause/serial/DeletePaused (3.11s)

TestPause/serial/VerifyDeletedResources (0.43s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-215582
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-215582: exit status 1 (16.710964ms)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-215582: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.43s)
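
A deleted profile should leave no container, volume, or network behind, and the docker inspect family exits non-zero for missing objects, which is exactly what this test leans on. A minimal sketch (profile name illustrative; the docker driver names all three resources after the profile):

	p=demo
	out/minikube-linux-arm64 delete -p "$p"
	# each inspect should now fail; any success means leftover resources
	docker container inspect "$p" >/dev/null 2>&1 && echo "container still exists"
	docker volume inspect "$p" >/dev/null 2>&1 && echo "volume still exists"
	docker network inspect "$p" >/dev/null 2>&1 && echo "network still exists"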

TestNetworkPlugins/group/false (5.39s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-156590 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-156590 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (243.268164ms)

-- stdout --
	* [false-156590] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1119 22:34:05.311928  195921 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:34:05.312162  195921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:05.312189  195921 out.go:374] Setting ErrFile to fd 2...
	I1119 22:34:05.312207  195921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:05.312508  195921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-2347/.minikube/bin
	I1119 22:34:05.312968  195921 out.go:368] Setting JSON to false
	I1119 22:34:05.313885  195921 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4566,"bootTime":1763587079,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1119 22:34:05.313985  195921 start.go:143] virtualization:  
	I1119 22:34:05.317600  195921 out.go:179] * [false-156590] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:34:05.320840  195921 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:34:05.320903  195921 notify.go:221] Checking for updates...
	I1119 22:34:05.327160  195921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:34:05.330296  195921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-2347/kubeconfig
	I1119 22:34:05.333320  195921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-2347/.minikube
	I1119 22:34:05.336238  195921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:34:05.339138  195921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:34:05.342730  195921 config.go:182] Loaded profile config "force-systemd-env-388402": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 22:34:05.342898  195921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:34:05.378003  195921 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:34:05.378211  195921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:34:05.464028  195921 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2025-11-19 22:34:05.453973847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:34:05.464133  195921 docker.go:319] overlay module found
	I1119 22:34:05.470214  195921 out.go:179] * Using the docker driver based on user configuration
	I1119 22:34:05.473224  195921 start.go:309] selected driver: docker
	I1119 22:34:05.473253  195921 start.go:930] validating driver "docker" against <nil>
	I1119 22:34:05.473268  195921 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:34:05.476797  195921 out.go:203] 
	W1119 22:34:05.479737  195921 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1119 22:34:05.482522  195921 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-156590 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-156590

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-156590

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-156590

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-156590

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-156590

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-156590

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-156590

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-156590

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-156590

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-156590

>>> host: /etc/nsswitch.conf:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: /etc/hosts:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: /etc/resolv.conf:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-156590

>>> host: crictl pods:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: crictl containers:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> k8s: describe netcat deployment:
error: context "false-156590" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-156590" does not exist

>>> k8s: netcat logs:
error: context "false-156590" does not exist

>>> k8s: describe coredns deployment:
error: context "false-156590" does not exist

>>> k8s: describe coredns pods:
error: context "false-156590" does not exist

>>> k8s: coredns logs:
error: context "false-156590" does not exist

>>> k8s: describe api server pod(s):
error: context "false-156590" does not exist

>>> k8s: api server logs:
error: context "false-156590" does not exist

>>> host: /etc/cni:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: ip a s:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: ip r s:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: iptables-save:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: iptables table nat:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> k8s: describe kube-proxy daemon set:
error: context "false-156590" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-156590" does not exist

>>> k8s: kube-proxy logs:
error: context "false-156590" does not exist

>>> host: kubelet daemon status:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: kubelet daemon config:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> k8s: kubelet logs:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-156590

>>> host: docker daemon status:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: docker daemon config:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: /etc/docker/daemon.json:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: docker system info:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: cri-docker daemon status:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: cri-docker daemon config:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: cri-dockerd version:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: containerd daemon status:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: containerd daemon config:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: /etc/containerd/config.toml:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: containerd config dump:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: crio daemon status:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: crio daemon config:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: /etc/crio:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

>>> host: crio config:
* Profile "false-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-156590"

----------------------- debugLogs end: false-156590 [took: 4.843607851s] --------------------------------
helpers_test.go:175: Cleaning up "false-156590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-156590
--- PASS: TestNetworkPlugins/group/false (5.39s)
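
This pass is a negative test: with the containerd runtime, minikube rejects --cni=false during flag validation (exit code 14, MK_USAGE) before creating anything, and the debug dump above simply confirms no cluster or context ever existed. A sketch of the guard and one accepted alternative (profile name illustrative; bridge is one of the CNI values minikube accepts):

	# rejected up front; no container, profile, or kubeconfig context is created
	out/minikube-linux-arm64 start -p demo --cni=false --driver=docker --container-runtime=containerd
	# => X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	out/minikube-linux-arm64 start -p demo --cni=bridge --driver=docker --container-runtime=containerd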

TestStartStop/group/old-k8s-version/serial/FirstStart (72.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1119 22:36:28.556546    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m12.993527202s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (72.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-264160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-264160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058612024s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-264160 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/old-k8s-version/serial/Stop (12.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-264160 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-264160 --alsologtostderr -v=3: (12.435448027s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264160 -n old-k8s-version-264160
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264160 -n old-k8s-version-264160: exit status 7 (79.503002ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-264160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
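
Note the pattern: `status --format={{.Host}}` exits 7 for a stopped profile (which the harness treats as acceptable), yet addon changes are still recorded while stopped; the later SecondStart and dashboard checks rely on the setting being applied at the next start. A sketch (profile name illustrative):

	out/minikube-linux-arm64 status -p demo --format='{{.Host}}' || true   # Stopped, exit status 7
	out/minikube-linux-arm64 addons enable dashboard -p demo               # recorded while stopped
	out/minikube-linux-arm64 start -p demo                                 # addon applied here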

TestStartStop/group/old-k8s-version/serial/SecondStart (27.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-264160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (26.935237167s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264160 -n old-k8s-version-264160
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (27.38s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xrhgq" [cdbf9835-738e-43c2-beda-ac98e7111a27] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xrhgq" [cdbf9835-738e-43c2-beda-ac98e7111a27] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004020078s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.00s)
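
The readiness polling the harness performs (a pod matching k8s-app=kubernetes-dashboard reaching Running) can be approximated with kubectl wait against the same label and namespace (context name illustrative):

	kubectl --context demo -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m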

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xrhgq" [cdbf9835-738e-43c2-beda-ac98e7111a27] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003584299s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-264160 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-264160 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (4.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-264160 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-264160 -n old-k8s-version-264160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-264160 -n old-k8s-version-264160: exit status 2 (471.856324ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-264160 -n old-k8s-version-264160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-264160 -n old-k8s-version-264160: exit status 2 (427.037768ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-264160 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-264160 -n old-k8s-version-264160
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-264160 -n old-k8s-version-264160
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.09s)
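
The pause cycle checks component state between transitions: while paused, the apiserver reports Paused and the kubelet Stopped, each with exit status 2, and after unpause the same queries are expected to succeed again. A sketch (profile name illustrative):

	out/minikube-linux-arm64 pause -p demo
	out/minikube-linux-arm64 status -p demo --format='{{.APIServer}}' || true   # Paused
	out/minikube-linux-arm64 status -p demo --format='{{.Kubelet}}' || true     # Stopped
	out/minikube-linux-arm64 unpause -p demo
	out/minikube-linux-arm64 status -p demo --format='{{.APIServer}}'           # back to Running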

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-570856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-570856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m27.28535557s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.29s)

TestStartStop/group/embed-certs/serial/FirstStart (88.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-227235 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1119 22:38:25.485733    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-227235 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m28.126514689s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-570856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-570856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.033005656s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-570856 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-570856 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-570856 --alsologtostderr -v=3: (12.196349171s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.20s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-227235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-227235 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/Stop (12.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-227235 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-227235 --alsologtostderr -v=3: (12.152508053s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.15s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856: exit status 7 (68.64591ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-570856 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-570856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-570856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (53.800711501s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.19s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-227235 -n embed-certs-227235
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-227235 -n embed-certs-227235: exit status 7 (100.993462ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-227235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (59.73s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-227235 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-227235 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (59.223651184s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-227235 -n embed-certs-227235
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.73s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r2t97" [33d18258-aad7-48d7-8241-2a87b30a44f5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.008142023s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r2t97" [33d18258-aad7-48d7-8241-2a87b30a44f5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004538037s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-570856 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-570856 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
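
The two "Found non-minikube image" lines are informational: the check lists the images present in the node and reports anything minikube did not ship itself, here the kindnet CNI image and the busybox test image. A sketch of that classification step (the prefix list below is illustrative, not the suite's actual list):

package main

import (
	"fmt"
	"strings"
)

// isMinikubeImage mimics the classification above: anything outside the images
// minikube itself ships gets reported.
func isMinikubeImage(image string) bool {
	for _, p := range []string{
		"registry.k8s.io/", // kube-apiserver, coredns, pause, ... (assumed set)
		"gcr.io/k8s-minikube/storage-provisioner",
	} {
		if strings.HasPrefix(image, p) {
			return true
		}
	}
	return false
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"kindest/kindnetd:v20250512-df8de77b",      // reported above
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc", // reported above
	} {
		if !isMinikubeImage(img) {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}
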
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-570856 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856: exit status 2 (373.803578ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856: exit status 2 (503.453675ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-570856 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-570856 -n default-k8s-diff-port-570856
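
The pause cycle above encodes cluster state in exit codes: while paused, status --format={{.APIServer}} prints Paused and --format={{.Kubelet}} prints Stopped, both with exit status 2; after unpause, both probes exit 0 again. A standalone Go sketch of the same checks (profile name from this log; flag handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

// probe mirrors the status checks above, returning the printed state and the
// exit code rather than failing on a non-zero exit.
func probe(profile, field string) (string, int) {
	out, err := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	if exitErr, ok := err.(*exec.ExitError); ok {
		return string(out), exitErr.ExitCode()
	}
	return string(out), 0
}

func main() {
	profile := "default-k8s-diff-port-570856"
	exec.Command("minikube", "pause", "-p", profile).Run()
	for _, field := range []string{"APIServer", "Kubelet"} {
		state, code := probe(profile, field) // expect exit 2 while paused
		fmt.Printf("%s=%s exit=%d\n", field, state, code)
	}
	exec.Command("minikube", "unpause", "-p", profile).Run()
}
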
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.40s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5n4pl" [d7bf9513-454d-4121-936b-b5487ee22f32] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004138084s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/FirstStart (76.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-546032 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-546032 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m16.136484458s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.14s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5n4pl" [d7bf9513-454d-4121-936b-b5487ee22f32] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003882799s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-227235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-227235 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/embed-certs/serial/Pause (4.49s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-227235 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-227235 -n embed-certs-227235
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-227235 -n embed-certs-227235: exit status 2 (500.245459ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-227235 -n embed-certs-227235
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-227235 -n embed-certs-227235: exit status 2 (486.20023ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-227235 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-227235 --alsologtostderr -v=1: (1.292400711s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-227235 -n embed-certs-227235
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-227235 -n embed-certs-227235
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.49s)

TestStartStop/group/newest-cni/serial/FirstStart (45.1s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-616827 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1119 22:41:43.421658    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:43.427948    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:43.439274    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:43.460620    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:43.501945    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:43.583289    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:43.745080    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:44.067112    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:44.709059    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:45.990590    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:48.008507    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:48.551880    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:41:53.673465    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:42:03.914698    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
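
The E-lines above are not a failure of this test: they come from client-go's background certificate reloader still pointed at the old-k8s-version-264160 profile after its files were removed. The retry gaps double each time, from about 6 ms between the first two entries up to 10.24 s between the last two, a textbook exponential backoff. A one-look sanity check of that schedule (the ~5 ms base is assumed; only the doubling is visible in the log):

package main

import (
	"fmt"
	"time"
)

// The retry interval doubles each attempt; twelve doublings from ~5 ms land
// at ~10.24 s, matching the spacing of the last two errors above.
func main() {
	d := 5 * time.Millisecond // assumed base; the base itself is not in the log
	for i := 0; i < 12; i++ {
		fmt.Println(d)
		d *= 2
	}
}
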
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-616827 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (45.103825005s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.10s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-616827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-616827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.335839317s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/newest-cni/serial/Stop (1.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-616827 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-616827 --alsologtostderr -v=3: (1.411289274s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.41s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-616827 -n newest-cni-616827
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-616827 -n newest-cni-616827: exit status 7 (76.444484ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-616827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (18.08s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-616827 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-616827 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (17.669910715s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-616827 -n newest-cni-616827
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.08s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-616827 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/Pause (3.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-616827 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-616827 -n newest-cni-616827
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-616827 -n newest-cni-616827: exit status 2 (342.084602ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-616827 -n newest-cni-616827
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-616827 -n newest-cni-616827: exit status 2 (374.187498ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-616827 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-616827 -n newest-cni-616827
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-616827 -n newest-cni-616827
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.37s)

TestNetworkPlugins/group/auto/Start (92.66s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m32.658197904s)
--- PASS: TestNetworkPlugins/group/auto/Start (92.66s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.49s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-546032 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-546032 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.287029122s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-546032 describe deploy/metrics-server -n kube-system
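
The --images/--registries pair redirects the metrics-server addon to a substitute image and registry, and the describe step then confirms the Deployment picked the override up. A sketch of the same check (that the fake registry appears as a plain substring of the describe output is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "no-preload-546032"
	// Enable the addon with both overrides, mirroring the command above.
	exec.Command("minikube", "addons", "enable", "metrics-server", "-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain").Run()
	// Then confirm the Deployment references the substitute registry.
	out, _ := exec.Command("kubectl", "--context", profile,
		"describe", "deploy/metrics-server", "-n", "kube-system").Output()
	fmt.Println("override applied:", strings.Contains(string(out), "fake.domain"))
}
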
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.49s)

TestStartStop/group/no-preload/serial/Stop (12.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-546032 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-546032 --alsologtostderr -v=3: (12.355181837s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.36s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-546032 -n no-preload-546032
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-546032 -n no-preload-546032: exit status 7 (91.919595ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-546032 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (56.33s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-546032 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1119 22:43:05.358968    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:43:25.485619    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/functional-183559/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-546032 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.943598072s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-546032 -n no-preload-546032
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.33s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9lgn4" [40a78b0a-d6b1-46b3-a91c-6e39f5d1eca1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0038179s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.55s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9lgn4" [40a78b0a-d6b1-46b3-a91c-6e39f5d1eca1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020370752s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-546032 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.55s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-546032 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (3.14s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-546032 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-546032 -n no-preload-546032
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-546032 -n no-preload-546032: exit status 2 (350.2475ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-546032 -n no-preload-546032
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-546032 -n no-preload-546032: exit status 2 (336.828836ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-546032 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-546032 -n no-preload-546032
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-546032 -n no-preload-546032
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)
E1119 22:49:50.384644    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:49:56.636655    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/kindnet/Start (85.36s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m25.36201672s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.36s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-156590 "pgrep -a kubelet"
I1119 22:44:09.105843    4144 config.go:182] Loaded profile config "auto-156590": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
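
KubeletFlags works by reading the live kubelet command line: minikube ssh runs pgrep inside the node, and the printed arguments can then be checked. A minimal sketch (the containerd socket substring is an assumption; the actual flag set is not shown in this log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Print the running kubelet's full command line from inside the node.
	out, err := exec.Command("minikube", "ssh", "-p", "auto-156590",
		"pgrep -a kubelet").Output()
	if err != nil {
		panic(err)
	}
	cmdline := string(out)
	// Assumed check: a containerd-backed node normally points the kubelet
	// at the containerd socket.
	fmt.Println("containerd endpoint configured:",
		strings.Contains(cmdline, "containerd.sock"))
}
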
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/auto/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-156590 replace --force -f testdata/netcat-deployment.yaml
I1119 22:44:09.438080    4144 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xv79c" [72ba17a8-27f6-4a8a-9d25-83dd3f9d25e3] Pending
helpers_test.go:352: "netcat-cd4db9dbf-xv79c" [72ba17a8-27f6-4a8a-9d25-83dd3f9d25e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.007625772s
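
The "waiting ... for pods matching" lines above are a poll over labelled pods until every one reports phase Running. A standalone Go sketch of that wait (context name and 15m budget from this log; the 2 s polling interval is assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// allRunning reports whether every pod labelled app=netcat is in phase Running.
func allRunning(kubeContext string) bool {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "pods", "-l", "app=netcat",
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false // deployment not observed yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

func main() {
	deadline := time.Now().Add(15 * time.Minute) // the test's wait budget
	for time.Now().Before(deadline) {
		if allRunning("auto-156590") {
			fmt.Println("app=netcat healthy")
			return
		}
		time.Sleep(2 * time.Second) // assumed polling interval
	}
	fmt.Println("timed out waiting for app=netcat")
}
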
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.34s)

TestNetworkPlugins/group/auto/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-156590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.29s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
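
DNS, Localhost, and HairPin above form one connectivity ladder: nslookup exercises cluster DNS, nc -z localhost 8080 the pod's own loopback, and nc -z netcat 8080 the hairpin path, the pod reaching itself through its own Service name. A Go sketch driving the same three probes (commands copied from the log; reporting simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	probes := []struct {
		name string
		cmd  []string
	}{
		{"dns", []string{"nslookup", "kubernetes.default"}},
		{"localhost", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}},
		{"hairpin", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"}},
	}
	for _, p := range probes {
		// kubectl exec into the netcat deployment, exactly as the log does.
		args := append([]string{"--context", "auto-156590",
			"exec", "deployment/netcat", "--"}, p.cmd...)
		if err := exec.Command("kubectl", args...).Run(); err != nil {
			fmt.Println(p.name, "FAIL:", err)
			continue
		}
		fmt.Println(p.name, "ok")
	}
}
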
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.29s)

TestNetworkPlugins/group/calico/Start (62.76s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1119 22:44:49.429094    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:45:09.910356    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m2.757432947s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.76s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-qnd7q" [a5ed4e70-b315-44cf-a1f0-9c64472854bf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005039256s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-156590 "pgrep -a kubelet"
I1119 22:45:39.044374    4144 config.go:182] Loaded profile config "kindnet-156590": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-156590 replace --force -f testdata/netcat-deployment.yaml
I1119 22:45:39.424972    4144 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5jdzf" [0015a9e7-3467-492d-ba85-9b3fa1053cf3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5jdzf" [0015a9e7-3467-492d-ba85-9b3fa1053cf3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00433012s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.42s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-hcnj5" [9634c269-8ebc-433e-8c92-44351f1fadc1] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-hcnj5" [9634c269-8ebc-433e-8c92-44351f1fadc1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007206789s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-156590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-156590 "pgrep -a kubelet"
I1119 22:45:55.340322    4144 config.go:182] Loaded profile config "calico-156590": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (10.46s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-156590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lgllx" [8f0452e2-a9f6-49ad-89c3-4525e62e923f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lgllx" [8f0452e2-a9f6-49ad-89c3-4525e62e923f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004790111s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.46s)

TestNetworkPlugins/group/calico/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-156590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

TestNetworkPlugins/group/custom-flannel/Start (64.47s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m4.473294971s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.47s)

TestNetworkPlugins/group/enable-default-cni/Start (81.51s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1119 22:46:43.421871    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:46:48.008292    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/addons-030214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:47:11.121971    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/old-k8s-version-264160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:47:12.795071    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/default-k8s-diff-port-570856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m21.508085648s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.51s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-156590 "pgrep -a kubelet"
I1119 22:47:18.203879    4144 config.go:182] Loaded profile config "custom-flannel-156590": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-156590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-27n79" [4e099a28-cfa2-4018-9ecc-b40f9fc9894b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-27n79" [4e099a28-cfa2-4018-9ecc-b40f9fc9894b] Running
E1119 22:47:23.937180    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:47:23.943631    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:47:23.955080    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:47:23.976548    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:47:24.017971    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:47:24.099447    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:47:24.260985    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:47:24.582275    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:47:25.224376    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:47:26.506436    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003427687s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-156590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)
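The DNS probe is a single in-pod lookup of the short name kubernetes.default, which only succeeds when both the pod's /etc/resolv.conf search path and the cluster DNS service are wired up by the CNI. Two follow-up commands that narrow down a failure, should this step ever regress (a sketch, assuming the same context and deployment as above; 10.96.0.10 is the cluster DNS address the debug collector also queries elsewhere in this report):

# Inspect the resolver configuration the kubelet injected into the pod.
kubectl --context custom-flannel-156590 exec deployment/netcat -- cat /etc/resolv.conf
# Query the cluster DNS service directly, bypassing the search path.
kubectl --context custom-flannel-156590 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10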

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
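Localhost and HairPin run the same netcat invocation against two different destinations, and that difference is the point of the pair: -z makes nc scan the port without sending data, -w 5 caps the connect timeout at five seconds, and -i 5 spaces out successive probes. Against localhost the packet never leaves the pod; against the netcat service name the pod addresses itself through its own service VIP, which requires hairpin NAT support somewhere in the plugin's data path. A failing HairPin check alongside a passing Localhost check therefore points at the CNI, not the workload:

# Reproduce the two probes by hand (same context as above).
kubectl --context custom-flannel-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"  # never leaves the pod
kubectl --context custom-flannel-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # pod -> own service VIP -> same pod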

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (69.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m9.297727973s)
--- PASS: TestNetworkPlugins/group/flannel/Start (69.30s)
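Every Start step in this group uses the same invocation with only the --cni value changing. An annotated form of the command, with exactly the values logged above:

out/minikube-linux-arm64 start -p flannel-156590 \
  --memory=3072 \
  --alsologtostderr \
  --wait=true --wait-timeout=15m \
  --cni=flannel \
  --driver=docker \
  --container-runtime=containerd
# --memory: MB allotted to the node
# --wait/--wait-timeout: block until core components are healthy, or fail after 15m
# --cni: the plugin under test; sibling groups pass bridge, custom-flannel, etc.
# --driver=docker: the "node" is itself a Docker container (the kic driver)
# --container-runtime=containerd: the runtime inside that node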

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-156590 "pgrep -a kubelet"
I1119 22:47:54.531574    4144 config.go:182] Loaded profile config "enable-default-cni-156590": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)
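KubeletFlags asserts nothing about networking directly: pgrep -a prints the full command line along with the PID, so the test can read back the flags minikube rendered for the kubelet (runtime endpoint, cgroup driver, and so on). The same check by hand, as a sketch:

# -a shows the complete command line, making the kubelet's effective flags visible in one shot.
out/minikube-linux-arm64 ssh -p enable-default-cni-156590 "pgrep -a kubelet"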

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-156590 replace --force -f testdata/netcat-deployment.yaml
I1119 22:47:54.912347    4144 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lhn7r" [bd87eb65-2b9d-4d5e-8f9f-daf0457c86be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lhn7r" [bd87eb65-2b9d-4d5e-8f9f-daf0457c86be] Running
E1119 22:48:04.915740    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004652606s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-156590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (85.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1119 22:48:45.877742    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/no-preload-546032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-156590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m25.08567287s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-zvw5k" [b6d80868-c50c-46e5-975f-7359e1636696] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004293459s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
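ControllerPod exists only for plugins that ship their own agent: it waits for the app=flannel DaemonSet pod in the kube-flannel namespace, so the later NetCatPod step exercises a plugin that is actually running. An equivalent manual check (sketch; the 120s timeout is an arbitrary choice, not the harness's):

kubectl --context flannel-156590 -n kube-flannel get pods -l app=flannel
kubectl --context flannel-156590 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=120s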

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-156590 "pgrep -a kubelet"
I1119 22:49:06.588258    4144 config.go:182] Loaded profile config "flannel-156590": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-156590 replace --force -f testdata/netcat-deployment.yaml
I1119 22:49:06.899911    4144 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ll9tn" [9e1b3db0-7b01-4aab-a421-bb450985ebb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1119 22:49:09.409328    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:49:09.415661    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:49:09.426959    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:49:09.448437    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:49:09.489747    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:49:09.571138    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:49:09.732499    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:49:10.054739    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:49:10.697023    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-ll9tn" [9e1b3db0-7b01-4aab-a421-bb450985ebb6] Running
E1119 22:49:11.978867    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:49:14.540747    4144 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-2347/.minikube/profiles/auto-156590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003918084s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-156590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-156590 "pgrep -a kubelet"
I1119 22:49:57.514848    4144 config.go:182] Loaded profile config "bridge-156590": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-156590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2r28n" [808cd3a7-0ffe-4b76-af9c-e678a858df74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2r28n" [808cd3a7-0ffe-4b76-af9c-e678a858df74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003327366s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-156590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-156590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    

Test skip (30/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
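The cached-images and binaries skips share one cause: a preload tarball (a pre-baked bundle of images and binaries for the requested Kubernetes version and runtime) was already on disk, so the individual caching paths have nothing to do. To see what the harness found, a sketch; the cache layout below is the conventional minikube location, assumed rather than taken from this log:

# Preloads live under the test run's MINIKUBE_HOME.
ls -lh /home/jenkins/minikube-integration/21918-2347/.minikube/cache/preloaded-tarball/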

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-531544 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-531544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-531544
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-063316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-063316
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-156590 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-156590

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-156590

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-156590

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-156590

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-156590

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-156590

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-156590

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-156590

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-156590

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-156590

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: /etc/hosts:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: /etc/resolv.conf:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-156590

>>> host: crictl pods:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: crictl containers:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> k8s: describe netcat deployment:
error: context "kubenet-156590" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-156590" does not exist

>>> k8s: netcat logs:
error: context "kubenet-156590" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-156590" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-156590" does not exist

>>> k8s: coredns logs:
error: context "kubenet-156590" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-156590" does not exist

>>> k8s: api server logs:
error: context "kubenet-156590" does not exist

>>> host: /etc/cni:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: ip a s:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: ip r s:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: iptables-save:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: iptables table nat:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-156590" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-156590" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-156590" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: kubelet daemon config:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> k8s: kubelet logs:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-156590

>>> host: docker daemon status:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: docker daemon config:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: docker system info:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: cri-docker daemon status:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: cri-docker daemon config:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: cri-dockerd version:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: containerd daemon status:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: containerd daemon config:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: containerd config dump:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: crio daemon status:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: crio daemon config:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: /etc/crio:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

>>> host: crio config:
* Profile "kubenet-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-156590"

----------------------- debugLogs end: kubenet-156590 [took: 4.680055241s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-156590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-156590
--- SKIP: TestNetworkPlugins/group/kubenet (4.89s)
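Every failure line in the dump above is the same story told twice: the kubenet profile was skipped before minikube start ever ran, so kubectl has no such context and minikube has no such profile. The empty kubeconfig echoed under ">>> k8s: kubectl config:" (clusters, contexts, and users all null) confirms it. Both sides can be checked directly, as a sketch:

# kubectl's view: a skipped profile never becomes a context.
kubectl config get-contexts
# minikube's view: the dump's own output suggests this command.
out/minikube-linux-arm64 profile list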

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-156590 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-156590

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-156590

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-156590

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-156590

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-156590

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-156590

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-156590

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-156590

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-156590

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-156590

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-156590

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> k8s: describe netcat deployment:
error: context "cilium-156590" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-156590" does not exist

>>> k8s: netcat logs:
error: context "cilium-156590" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-156590" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-156590" does not exist

>>> k8s: coredns logs:
error: context "cilium-156590" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-156590" does not exist

>>> k8s: api server logs:
error: context "cilium-156590" does not exist

>>> host: /etc/cni:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: ip a s:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: ip r s:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: iptables-save:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: iptables table nat:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-156590

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-156590

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-156590" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-156590" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-156590

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-156590

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-156590" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-156590" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-156590" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-156590" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-156590" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: kubelet daemon config:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> k8s: kubelet logs:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

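Note: the empty kubeconfig above (clusters, contexts and users all null) is the root cause of every failure in this dump: the "cilium-156590" profile was never created, so no kubectl context exists for it and both the kubectl and minikube debug commands fail. A minimal way to confirm this by hand (a sketch using the profile name from this log; both are standard kubectl/minikube subcommands):

	# Reports that the context was not found when it is missing from the kubeconfig.
	kubectl config get-contexts cilium-156590

	# Lists only the profiles minikube actually created; cilium-156590 is absent.
	minikube profile list
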
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-156590

>>> host: docker daemon status:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: docker daemon config:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: docker system info:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: cri-docker daemon status:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: cri-docker daemon config:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: cri-dockerd version:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: containerd daemon status:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: containerd daemon config:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: containerd config dump:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: crio daemon status:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: crio daemon config:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: /etc/crio:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

>>> host: crio config:
* Profile "cilium-156590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-156590"

----------------------- debugLogs end: cilium-156590 [took: 4.063135182s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-156590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-156590
--- SKIP: TestNetworkPlugins/group/cilium (4.26s)
