Test Report: Docker_Linux_containerd_arm64 22000

3f3a61283993ee602bd323c44b704727ac3a4ece:2025-11-29:42558

Failed tests (4/333)

Order  Failed test  Duration (s)
303 TestStartStop/group/old-k8s-version/serial/DeployApp 18.69
308 TestStartStop/group/no-preload/serial/DeployApp 13.92
325 TestStartStop/group/embed-certs/serial/DeployApp 12.87
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 15.17
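
All four DeployApp failures are the same assertion: the test deploys testdata/busybox.yaml, waits up to 8m0s for the pod to become Ready, then runs 'ulimit -n' inside it and expects 1048576, but each cluster returned 1024. A minimal reproduction sketch against the first affected profile (assumes the old-k8s-version-071895 cluster is still running; the kubectl wait step stands in for the test's own pod polling):

	# deploy the same busybox pod the test uses and check its open-file limit
	kubectl --context old-k8s-version-071895 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-071895 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context old-k8s-version-071895 exec busybox -- /bin/sh -c "ulimit -n"   # test expects 1048576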
TestStartStop/group/old-k8s-version/serial/DeployApp (18.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-071895 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3abcbd08-d7c4-4a13-b94c-6f6424975411] Pending
helpers_test.go:352: "busybox" [3abcbd08-d7c4-4a13-b94c-6f6424975411] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3abcbd08-d7c4-4a13-b94c-6f6424975411] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.008889486s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-071895 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-071895
helpers_test.go:243: (dbg) docker inspect old-k8s-version-071895:

-- stdout --
	[
	    {
	        "Id": "cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0",
	        "Created": "2025-11-29T09:19:35.843753446Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 219639,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:19:35.922684387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0/hosts",
	        "LogPath": "/var/lib/docker/containers/cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0/cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0-json.log",
	        "Name": "/old-k8s-version-071895",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-071895:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-071895",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0",
	                "LowerDir": "/var/lib/docker/overlay2/39dddc1dab2647088ef22e0a22ddfff676f8c9bdc540988436a11252cc093aa5-init/diff:/var/lib/docker/overlay2/fc2ab0019906b90b3f033fa414f560878b73f7ff0ebdf77a0b554a40813009d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39dddc1dab2647088ef22e0a22ddfff676f8c9bdc540988436a11252cc093aa5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39dddc1dab2647088ef22e0a22ddfff676f8c9bdc540988436a11252cc093aa5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39dddc1dab2647088ef22e0a22ddfff676f8c9bdc540988436a11252cc093aa5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-071895",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-071895/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-071895",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-071895",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-071895",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "60a614c2d74d8f721c5d191b45e8f8728a313afe9d5488b154acf3a0ac189fb9",
	            "SandboxKey": "/var/run/docker/netns/60a614c2d74d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-071895": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:be:6c:06:cc:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "46e34ec2f3d70587bfaede542f848856d8f0dbb2dcdc34fe102884ad13766b95",
	                    "EndpointID": "2663a5dbde2357e0d7269cf1f8d9d8bb11ffe6e49aa8754901238cb93acbbf02",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-071895",
	                        "cb3949000538"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
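Note the empty "Ulimits": [] in the HostConfig above: the kic container was created without an explicit nofile override, so the 1024 seen by the test is plausibly the default file-descriptor limit inherited from the Docker daemon / runtime on this host rather than the 1048576 the test expects. A hedged sketch for confirming and overriding that default (the --ulimit flag and the default-ulimits daemon.json key are standard Docker options, not settings taken from this report):

	# check the effective limit inside the running kic container
	docker exec old-k8s-version-071895 sh -c 'ulimit -n'

	# per-container override at creation time
	docker run --rm --ulimit nofile=1048576:1048576 busybox sh -c 'ulimit -n'

	# daemon-wide default via /etc/docker/daemon.json (restart dockerd afterwards):
	# {
	#   "default-ulimits": {
	#     "nofile": { "Name": "nofile", "Soft": 1048576, "Hard": 1048576 }
	#   }
	# }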
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-071895 -n old-k8s-version-071895
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-071895 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-071895 logs -n 25: (1.765314466s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p cilium-420729 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo containerd config dump                                                                                                                                                                                                        │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo crio config                                                                                                                                                                                                                   │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ delete  │ -p cilium-420729                                                                                                                                                                                                                                    │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p force-systemd-env-559836 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ ssh     │ force-systemd-env-559836 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ delete  │ -p force-systemd-env-559836                                                                                                                                                                                                                         │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p cert-expiration-592440 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ delete  │ -p running-upgrade-115889                                                                                                                                                                                                                           │ running-upgrade-115889   │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ start   │ -p cert-options-515442 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:19 UTC │
	│ ssh     │ cert-options-515442 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ ssh     │ -p cert-options-515442 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ delete  │ -p cert-options-515442                                                                                                                                                                                                                              │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p cert-expiration-592440 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ delete  │ -p cert-expiration-592440                                                                                                                                                                                                                           │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-230403        │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:20:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:20:12.939624  222878 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:20:12.939853  222878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:20:12.939881  222878 out.go:374] Setting ErrFile to fd 2...
	I1129 09:20:12.939901  222878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:20:12.940241  222878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:20:12.940820  222878 out.go:368] Setting JSON to false
	I1129 09:20:12.941892  222878 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3764,"bootTime":1764404249,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 09:20:12.941996  222878 start.go:143] virtualization:  
	I1129 09:20:12.947843  222878 out.go:179] * [no-preload-230403] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:20:12.951543  222878 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:20:12.951778  222878 notify.go:221] Checking for updates...
	I1129 09:20:12.959740  222878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:20:12.963748  222878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:20:12.967028  222878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 09:20:12.970194  222878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:20:12.973266  222878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:20:12.976789  222878 config.go:182] Loaded profile config "old-k8s-version-071895": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:20:12.976879  222878 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:20:13.015916  222878 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:20:13.016116  222878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:20:13.089040  222878 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:20:13.078615429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:20:13.089149  222878 docker.go:319] overlay module found
	I1129 09:20:13.094585  222878 out.go:179] * Using the docker driver based on user configuration
	I1129 09:20:13.101060  222878 start.go:309] selected driver: docker
	I1129 09:20:13.101087  222878 start.go:927] validating driver "docker" against <nil>
	I1129 09:20:13.101110  222878 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:20:13.101860  222878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:20:13.162298  222878 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:20:13.152737541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:20:13.162462  222878 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:20:13.162689  222878 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:20:13.165689  222878 out.go:179] * Using Docker driver with root privileges
	I1129 09:20:13.168555  222878 cni.go:84] Creating CNI manager for ""
	I1129 09:20:13.168702  222878 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:20:13.168717  222878 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:20:13.168799  222878 start.go:353] cluster config:
	{Name:no-preload-230403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-230403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:20:13.171944  222878 out.go:179] * Starting "no-preload-230403" primary control-plane node in "no-preload-230403" cluster
	I1129 09:20:13.174795  222878 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:20:13.177867  222878 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:20:13.180600  222878 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:20:13.180815  222878 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:20:13.180863  222878 cache.go:107] acquiring lock: {Name:mkc9ca05df03f187ae0239342774baa6ad8c9aea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.180958  222878 cache.go:107] acquiring lock: {Name:mk1a5c919477c9b6035d1da624b0b2445dbe0e73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181026  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 09:20:13.181043  222878 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 86.212µs
	I1129 09:20:13.181062  222878 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 09:20:13.181080  222878 cache.go:107] acquiring lock: {Name:mk74fc1ce0ee5a4f599a03d41c7dab91b2a2e933 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181115  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 09:20:13.181125  222878 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 46.598µs
	I1129 09:20:13.181131  222878 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 09:20:13.181141  222878 cache.go:107] acquiring lock: {Name:mk8695629c5903582c523a837d766d417499d914 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181179  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 09:20:13.181189  222878 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 49.445µs
	I1129 09:20:13.181196  222878 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 09:20:13.181205  222878 cache.go:107] acquiring lock: {Name:mk6962b4fc4c58f41448580e388a757daf8f6018 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181239  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 09:20:13.181249  222878 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 44.94µs
	I1129 09:20:13.181255  222878 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 09:20:13.181269  222878 cache.go:107] acquiring lock: {Name:mk75f52747e0531666c302459e925614b33b76b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181314  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1129 09:20:13.181323  222878 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 55.639µs
	I1129 09:20:13.181332  222878 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 09:20:13.181345  222878 cache.go:107] acquiring lock: {Name:mke59d5887f27460b7717e6fa1d7c7be222b2ad7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181380  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 09:20:13.181391  222878 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 46.433µs
	I1129 09:20:13.181396  222878 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 09:20:13.181409  222878 cache.go:107] acquiring lock: {Name:mkece740ade6508db73b1e245e73f976785e2996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181442  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 09:20:13.181450  222878 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 45.654µs
	I1129 09:20:13.181455  222878 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 09:20:13.181552  222878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/config.json ...
	I1129 09:20:13.181573  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/config.json: {Name:mkedfced3d2b7fa7d1f9faae9aecd4cdc6897bf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:13.181779  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 09:20:13.181796  222878 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 942.365µs
	I1129 09:20:13.181804  222878 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 09:20:13.181857  222878 cache.go:87] Successfully saved all images to host disk.
	I1129 09:20:13.201388  222878 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:20:13.201410  222878 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:20:13.201431  222878 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:20:13.201462  222878 start.go:360] acquireMachinesLock for no-preload-230403: {Name:mk2a91c20925489376678f93ce44b3d1de57601f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.201622  222878 start.go:364] duration metric: took 139.242µs to acquireMachinesLock for "no-preload-230403"
	I1129 09:20:13.201663  222878 start.go:93] Provisioning new machine with config: &{Name:no-preload-230403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-230403 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:20:13.201746  222878 start.go:125] createHost starting for "" (driver="docker")
	I1129 09:20:09.378511  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:09.878391  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:10.379008  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:10.879016  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:11.378477  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:11.879067  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:12.378498  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:12.878370  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:13.378426  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:13.879213  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:14.378760  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:14.880612  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:15.379061  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:15.530412  219229 kubeadm.go:1114] duration metric: took 11.369681639s to wait for elevateKubeSystemPrivileges
	I1129 09:20:15.530446  219229 kubeadm.go:403] duration metric: took 31.525981112s to StartCluster
	I1129 09:20:15.530463  219229 settings.go:142] acquiring lock: {Name:mk44917d1324740eeda65bf3aa312ad1561d3ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:15.530529  219229 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:20:15.531211  219229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:15.531425  219229 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:20:15.531520  219229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:20:15.531760  219229 config.go:182] Loaded profile config "old-k8s-version-071895": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:20:15.531752  219229 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:20:15.531869  219229 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-071895"
	I1129 09:20:15.531886  219229 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-071895"
	I1129 09:20:15.531914  219229 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:20:15.532442  219229 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:20:15.532702  219229 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-071895"
	I1129 09:20:15.532736  219229 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-071895"
	I1129 09:20:15.533094  219229 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:20:15.536113  219229 out.go:179] * Verifying Kubernetes components...
	I1129 09:20:15.539443  219229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:20:15.574128  219229 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-071895"
	I1129 09:20:15.574169  219229 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:20:15.574614  219229 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:20:15.575661  219229 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:15.578616  219229 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:20:15.578636  219229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:20:15.578703  219229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:20:15.596399  219229 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:20:15.596427  219229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:20:15.596503  219229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:20:15.630157  219229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:20:15.639128  219229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:20:15.896152  219229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:20:15.896336  219229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:20:16.015161  219229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:20:16.026843  219229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:20:17.194520  219229 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.298139458s)
	I1129 09:20:17.194560  219229 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1129 09:20:17.195641  219229 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.299459942s)
	I1129 09:20:17.196336  219229 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-071895" to be "Ready" ...
	I1129 09:20:17.598641  219229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.583439516s)
	I1129 09:20:17.598752  219229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.571873758s)
	I1129 09:20:17.633446  219229 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 09:20:13.207006  222878 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:20:13.207293  222878 start.go:159] libmachine.API.Create for "no-preload-230403" (driver="docker")
	I1129 09:20:13.207340  222878 client.go:173] LocalClient.Create starting
	I1129 09:20:13.207488  222878 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem
	I1129 09:20:13.207529  222878 main.go:143] libmachine: Decoding PEM data...
	I1129 09:20:13.207573  222878 main.go:143] libmachine: Parsing certificate...
	I1129 09:20:13.207655  222878 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem
	I1129 09:20:13.207690  222878 main.go:143] libmachine: Decoding PEM data...
	I1129 09:20:13.207710  222878 main.go:143] libmachine: Parsing certificate...
	I1129 09:20:13.208128  222878 cli_runner.go:164] Run: docker network inspect no-preload-230403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:20:13.227770  222878 cli_runner.go:211] docker network inspect no-preload-230403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:20:13.227856  222878 network_create.go:284] running [docker network inspect no-preload-230403] to gather additional debugging logs...
	I1129 09:20:13.227880  222878 cli_runner.go:164] Run: docker network inspect no-preload-230403
	W1129 09:20:13.250504  222878 cli_runner.go:211] docker network inspect no-preload-230403 returned with exit code 1
	I1129 09:20:13.250537  222878 network_create.go:287] error running [docker network inspect no-preload-230403]: docker network inspect no-preload-230403: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-230403 not found
	I1129 09:20:13.250551  222878 network_create.go:289] output of [docker network inspect no-preload-230403]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-230403 not found
	
	** /stderr **
	I1129 09:20:13.250655  222878 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:20:13.269213  222878 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8664e809540f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:5a:a5:48:89:fb} reservation:<nil>}
	I1129 09:20:13.269665  222878 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe5a1fed3d29 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8e:0c:ca:69:14:77} reservation:<nil>}
	I1129 09:20:13.270007  222878 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c3b36bc67c6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:2d:06:dd:2d:03} reservation:<nil>}
	I1129 09:20:13.270333  222878 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-46e34ec2f3d7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:63:b9:c9:b8:a0} reservation:<nil>}
	I1129 09:20:13.270853  222878 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a000e0}
	I1129 09:20:13.270885  222878 network_create.go:124] attempt to create docker network no-preload-230403 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1129 09:20:13.270944  222878 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-230403 no-preload-230403
	I1129 09:20:13.339116  222878 network_create.go:108] docker network no-preload-230403 192.168.85.0/24 created
	I1129 09:20:13.339148  222878 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-230403" container
	I1129 09:20:13.339222  222878 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:20:13.358931  222878 cli_runner.go:164] Run: docker volume create no-preload-230403 --label name.minikube.sigs.k8s.io=no-preload-230403 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:20:13.376848  222878 oci.go:103] Successfully created a docker volume no-preload-230403
	I1129 09:20:13.376977  222878 cli_runner.go:164] Run: docker run --rm --name no-preload-230403-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-230403 --entrypoint /usr/bin/test -v no-preload-230403:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:20:13.960824  222878 oci.go:107] Successfully prepared a docker volume no-preload-230403
	I1129 09:20:13.960886  222878 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1129 09:20:13.961020  222878 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1129 09:20:13.961137  222878 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:20:14.052602  222878 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-230403 --name no-preload-230403 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-230403 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-230403 --network no-preload-230403 --ip 192.168.85.2 --volume no-preload-230403:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:20:14.434508  222878 cli_runner.go:164] Run: docker container inspect no-preload-230403 --format={{.State.Running}}
	I1129 09:20:14.469095  222878 cli_runner.go:164] Run: docker container inspect no-preload-230403 --format={{.State.Status}}
	I1129 09:20:14.505837  222878 cli_runner.go:164] Run: docker exec no-preload-230403 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:20:14.574820  222878 oci.go:144] the created container "no-preload-230403" has a running status.
	I1129 09:20:14.574847  222878 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa...
	I1129 09:20:14.765899  222878 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:20:14.803197  222878 cli_runner.go:164] Run: docker container inspect no-preload-230403 --format={{.State.Status}}
	I1129 09:20:14.838341  222878 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:20:14.838366  222878 kic_runner.go:114] Args: [docker exec --privileged no-preload-230403 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:20:14.971747  222878 cli_runner.go:164] Run: docker container inspect no-preload-230403 --format={{.State.Status}}
	I1129 09:20:14.997195  222878 machine.go:94] provisionDockerMachine start ...
	I1129 09:20:14.997331  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:15.036227  222878 main.go:143] libmachine: Using SSH client type: native
	I1129 09:20:15.036638  222878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:20:15.036651  222878 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:20:15.042876  222878 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 09:20:17.636479  219229 addons.go:530] duration metric: took 2.104720222s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:20:17.699584  219229 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-071895" context rescaled to 1 replicas
	W1129 09:20:19.201224  219229 node_ready.go:57] node "old-k8s-version-071895" has "Ready":"False" status (will retry)
	I1129 09:20:18.208511  222878 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-230403
	
	I1129 09:20:18.208576  222878 ubuntu.go:182] provisioning hostname "no-preload-230403"
	I1129 09:20:18.208750  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:18.231955  222878 main.go:143] libmachine: Using SSH client type: native
	I1129 09:20:18.232303  222878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:20:18.232314  222878 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-230403 && echo "no-preload-230403" | sudo tee /etc/hostname
	I1129 09:20:18.417308  222878 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-230403
	
	I1129 09:20:18.417502  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:18.446833  222878 main.go:143] libmachine: Using SSH client type: native
	I1129 09:20:18.447196  222878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:20:18.447217  222878 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-230403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-230403/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-230403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:20:18.609294  222878 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:20:18.609323  222878 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-2317/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-2317/.minikube}
	I1129 09:20:18.609357  222878 ubuntu.go:190] setting up certificates
	I1129 09:20:18.609367  222878 provision.go:84] configureAuth start
	I1129 09:20:18.609424  222878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-230403
	I1129 09:20:18.633658  222878 provision.go:143] copyHostCerts
	I1129 09:20:18.633724  222878 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem, removing ...
	I1129 09:20:18.633733  222878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem
	I1129 09:20:18.633804  222878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem (1082 bytes)
	I1129 09:20:18.633884  222878 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem, removing ...
	I1129 09:20:18.633890  222878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem
	I1129 09:20:18.633917  222878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem (1123 bytes)
	I1129 09:20:18.633975  222878 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem, removing ...
	I1129 09:20:18.633979  222878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem
	I1129 09:20:18.634022  222878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem (1679 bytes)
	I1129 09:20:18.634072  222878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem org=jenkins.no-preload-230403 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-230403]
	I1129 09:20:18.830643  222878 provision.go:177] copyRemoteCerts
	I1129 09:20:18.830732  222878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:20:18.830804  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:18.849046  222878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa Username:docker}
	I1129 09:20:18.957503  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:20:18.982683  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:20:19.017142  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:20:19.036354  222878 provision.go:87] duration metric: took 426.964935ms to configureAuth
	I1129 09:20:19.036391  222878 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:20:19.036594  222878 config.go:182] Loaded profile config "no-preload-230403": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:20:19.036608  222878 machine.go:97] duration metric: took 4.039383275s to provisionDockerMachine
	I1129 09:20:19.036705  222878 client.go:176] duration metric: took 5.829342348s to LocalClient.Create
	I1129 09:20:19.036723  222878 start.go:167] duration metric: took 5.829433418s to libmachine.API.Create "no-preload-230403"
	I1129 09:20:19.036733  222878 start.go:293] postStartSetup for "no-preload-230403" (driver="docker")
	I1129 09:20:19.036744  222878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:20:19.036810  222878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:20:19.036863  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:19.054558  222878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa Username:docker}
	I1129 09:20:19.161154  222878 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:20:19.165056  222878 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:20:19.165086  222878 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:20:19.165116  222878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/addons for local assets ...
	I1129 09:20:19.165196  222878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/files for local assets ...
	I1129 09:20:19.165294  222878 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem -> 41372.pem in /etc/ssl/certs
	I1129 09:20:19.165459  222878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:20:19.175008  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:20:19.202166  222878 start.go:296] duration metric: took 165.419871ms for postStartSetup
	I1129 09:20:19.202535  222878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-230403
	I1129 09:20:19.222107  222878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/config.json ...
	I1129 09:20:19.222396  222878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:20:19.222436  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:19.240201  222878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa Username:docker}
	I1129 09:20:19.346358  222878 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:20:19.351907  222878 start.go:128] duration metric: took 6.150146246s to createHost
	I1129 09:20:19.351975  222878 start.go:83] releasing machines lock for "no-preload-230403", held for 6.150337057s
	I1129 09:20:19.352082  222878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-230403
	I1129 09:20:19.369647  222878 ssh_runner.go:195] Run: cat /version.json
	I1129 09:20:19.369701  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:19.369794  222878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:20:19.369854  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:19.412764  222878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa Username:docker}
	I1129 09:20:19.422423  222878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa Username:docker}
	I1129 09:20:19.618519  222878 ssh_runner.go:195] Run: systemctl --version
	I1129 09:20:19.626187  222878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:20:19.630590  222878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:20:19.630681  222878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:20:19.659536  222878 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1129 09:20:19.659559  222878 start.go:496] detecting cgroup driver to use...
	I1129 09:20:19.659594  222878 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 09:20:19.659644  222878 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:20:19.675641  222878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:20:19.690722  222878 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:20:19.690795  222878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:20:19.710602  222878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:20:19.735104  222878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:20:19.862098  222878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:20:20.020548  222878 docker.go:234] disabling docker service ...
	I1129 09:20:20.020764  222878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:20:20.049579  222878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:20:20.066560  222878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:20:20.195869  222878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:20:20.317681  222878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:20:20.332092  222878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:20:20.348128  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:20:20.359261  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:20:20.369657  222878 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1129 09:20:20.369726  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1129 09:20:20.379235  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:20:20.388089  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:20:20.397442  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:20:20.406391  222878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:20:20.414674  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:20:20.423896  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:20:20.432684  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:20:20.441584  222878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:20:20.449626  222878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:20:20.458580  222878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:20:20.578649  222878 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:20:20.669910  222878 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:20:20.670001  222878 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:20:20.674049  222878 start.go:564] Will wait 60s for crictl version
	I1129 09:20:20.674121  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:20.677882  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:20:20.711552  222878 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:20:20.711620  222878 ssh_runner.go:195] Run: containerd --version
	I1129 09:20:20.734338  222878 ssh_runner.go:195] Run: containerd --version
	I1129 09:20:20.760452  222878 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:20:20.763394  222878 cli_runner.go:164] Run: docker network inspect no-preload-230403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:20:20.779886  222878 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 09:20:20.783617  222878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:20:20.793588  222878 kubeadm.go:884] updating cluster {Name:no-preload-230403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-230403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:20:20.793740  222878 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:20:20.793820  222878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:20:20.818996  222878 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 09:20:20.819021  222878 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1129 09:20:20.819075  222878 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:20.819290  222878 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:20.819377  222878 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:20.819472  222878 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:20.819580  222878 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:20.819670  222878 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1129 09:20:20.819757  222878 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:20.819836  222878 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:20.820993  222878 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:20.821570  222878 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:20.821829  222878 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:20.821983  222878 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:20.822235  222878 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:20.822385  222878 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:20.822667  222878 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1129 09:20:20.823079  222878 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.122603  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1129 09:20:21.122681  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1129 09:20:21.142272  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a"
	I1129 09:20:21.142372  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:21.156765  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0"
	I1129 09:20:21.156842  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:21.158253  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196"
	I1129 09:20:21.158318  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:21.159304  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc"
	I1129 09:20:21.159366  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:21.163083  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e"
	I1129 09:20:21.163151  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:21.163275  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9"
	I1129 09:20:21.163342  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.165618  222878 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1129 09:20:21.165704  222878 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1129 09:20:21.165791  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.179345  222878 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1129 09:20:21.179432  222878 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:21.179520  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.225665  222878 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1129 09:20:21.225755  222878 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:21.225854  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.225939  222878 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1129 09:20:21.225991  222878 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:21.226032  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.226126  222878 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1129 09:20:21.226162  222878 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:21.226209  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.237496  222878 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1129 09:20:21.237581  222878 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:21.237665  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.239070  222878 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1129 09:20:21.239288  222878 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.239346  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.239286  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:21.239244  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:20:21.240343  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:21.240430  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:21.240578  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:21.248302  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:21.337972  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.338141  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:20:21.338156  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:21.350334  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:21.350500  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:21.350586  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:21.350679  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:21.436779  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:21.436931  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.437008  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:20:21.482969  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:21.483085  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:21.483137  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:21.491181  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:21.551573  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1129 09:20:21.551783  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1129 09:20:21.551782  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.551677  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1129 09:20:21.551991  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:20:21.589991  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1129 09:20:21.590095  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:20:21.590176  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1129 09:20:21.590233  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:20:21.590311  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1129 09:20:21.590381  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:20:21.599084  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1129 09:20:21.599203  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:20:21.606906  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1129 09:20:21.607120  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1129 09:20:21.607120  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1129 09:20:21.607245  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1129 09:20:21.607065  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1129 09:20:21.607080  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1129 09:20:21.607377  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1129 09:20:21.607089  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1129 09:20:21.607470  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1129 09:20:21.607010  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1129 09:20:21.607558  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1129 09:20:21.607693  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:20:21.611409  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1129 09:20:21.611475  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1129 09:20:21.621627  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1129 09:20:21.621809  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1129 09:20:21.715246  222878 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1129 09:20:21.715371  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1129 09:20:22.049743  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1129 09:20:22.146786  222878 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:20:22.146909  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1129 09:20:22.239238  222878 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1129 09:20:22.239372  222878 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1129 09:20:22.239461  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	W1129 09:20:21.201342  219229 node_ready.go:57] node "old-k8s-version-071895" has "Ready":"False" status (will retry)
	W1129 09:20:23.202246  219229 node_ready.go:57] node "old-k8s-version-071895" has "Ready":"False" status (will retry)
	I1129 09:20:23.813839  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.666881209s)
	I1129 09:20:23.813866  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1129 09:20:23.813884  222878 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:20:23.813934  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:20:23.813990  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.574510089s)
	I1129 09:20:23.814059  222878 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1129 09:20:23.814109  222878 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:23.814162  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:25.262220  222878 ssh_runner.go:235] Completed: which crictl: (1.448029919s)
	I1129 09:20:25.262315  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:25.262227  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.44826357s)
	I1129 09:20:25.262380  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1129 09:20:25.262400  222878 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:20:25.262443  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:20:26.253409  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1129 09:20:26.253448  222878 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:20:26.253502  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:20:26.253588  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:27.306910  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.053379529s)
	I1129 09:20:27.306932  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1129 09:20:27.306934  222878 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.053324259s)
	I1129 09:20:27.306948  222878 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:20:27.306998  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:27.306998  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:20:27.339643  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1129 09:20:27.339756  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1129 09:20:25.701399  219229 node_ready.go:57] node "old-k8s-version-071895" has "Ready":"False" status (will retry)
	W1129 09:20:28.200255  219229 node_ready.go:57] node "old-k8s-version-071895" has "Ready":"False" status (will retry)
	I1129 09:20:29.701513  219229 node_ready.go:49] node "old-k8s-version-071895" is "Ready"
	I1129 09:20:29.701545  219229 node_ready.go:38] duration metric: took 12.504000526s for node "old-k8s-version-071895" to be "Ready" ...
	I1129 09:20:29.701560  219229 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:20:29.701622  219229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:20:29.719485  219229 api_server.go:72] duration metric: took 14.188022937s to wait for apiserver process to appear ...
	I1129 09:20:29.719511  219229 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:20:29.719530  219229 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:20:29.736520  219229 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:20:29.740376  219229 api_server.go:141] control plane version: v1.28.0
	I1129 09:20:29.740411  219229 api_server.go:131] duration metric: took 20.892436ms to wait for apiserver health ...
	I1129 09:20:29.740421  219229 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:20:29.748136  219229 system_pods.go:59] 8 kube-system pods found
	I1129 09:20:29.748178  219229 system_pods.go:61] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:20:29.748186  219229 system_pods.go:61] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:29.748192  219229 system_pods.go:61] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:29.748201  219229 system_pods.go:61] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:29.748206  219229 system_pods.go:61] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:29.748209  219229 system_pods.go:61] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:29.748213  219229 system_pods.go:61] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:29.748219  219229 system_pods.go:61] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:20:29.748231  219229 system_pods.go:74] duration metric: took 7.804151ms to wait for pod list to return data ...
	I1129 09:20:29.748241  219229 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:20:29.751107  219229 default_sa.go:45] found service account: "default"
	I1129 09:20:29.751135  219229 default_sa.go:55] duration metric: took 2.887312ms for default service account to be created ...
	I1129 09:20:29.751147  219229 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:20:29.757754  219229 system_pods.go:86] 8 kube-system pods found
	I1129 09:20:29.757797  219229 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:20:29.757804  219229 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:29.757810  219229 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:29.757815  219229 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:29.757819  219229 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:29.757823  219229 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:29.757827  219229 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:29.757833  219229 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:20:29.757863  219229 retry.go:31] will retry after 212.604223ms: missing components: kube-dns
	I1129 09:20:29.976302  219229 system_pods.go:86] 8 kube-system pods found
	I1129 09:20:29.976339  219229 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:20:29.976347  219229 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:29.976353  219229 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:29.976359  219229 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:29.976364  219229 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:29.976368  219229 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:29.976373  219229 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:29.976379  219229 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:20:29.976398  219229 retry.go:31] will retry after 279.278138ms: missing components: kube-dns
	I1129 09:20:30.268579  219229 system_pods.go:86] 8 kube-system pods found
	I1129 09:20:30.268774  219229 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:20:30.268790  219229 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:30.268797  219229 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:30.268802  219229 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:30.268807  219229 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:30.268811  219229 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:30.268816  219229 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:30.268826  219229 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:20:30.268843  219229 retry.go:31] will retry after 368.451427ms: missing components: kube-dns
	I1129 09:20:30.642681  219229 system_pods.go:86] 8 kube-system pods found
	I1129 09:20:30.642718  219229 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:20:30.642726  219229 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:30.642733  219229 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:30.642738  219229 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:30.642743  219229 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:30.642747  219229 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:30.642752  219229 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:30.642761  219229 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:20:30.642776  219229 retry.go:31] will retry after 521.296683ms: missing components: kube-dns
	I1129 09:20:31.171413  219229 system_pods.go:86] 8 kube-system pods found
	I1129 09:20:31.171442  219229 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Running
	I1129 09:20:31.171449  219229 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:31.171454  219229 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:31.171472  219229 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:31.171482  219229 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:31.171487  219229 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:31.171502  219229 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:31.171506  219229 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Running
	I1129 09:20:31.171514  219229 system_pods.go:126] duration metric: took 1.420361927s to wait for k8s-apps to be running ...
	I1129 09:20:31.171522  219229 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:20:31.171578  219229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:20:31.191104  219229 system_svc.go:56] duration metric: took 19.570105ms WaitForService to wait for kubelet
	I1129 09:20:31.191198  219229 kubeadm.go:587] duration metric: took 15.659726511s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:20:31.191233  219229 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:20:31.194404  219229 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:20:31.194485  219229 node_conditions.go:123] node cpu capacity is 2
	I1129 09:20:31.194514  219229 node_conditions.go:105] duration metric: took 3.245952ms to run NodePressure ...
	I1129 09:20:31.194558  219229 start.go:242] waiting for startup goroutines ...
	I1129 09:20:31.194583  219229 start.go:247] waiting for cluster config update ...
	I1129 09:20:31.194611  219229 start.go:256] writing updated cluster config ...
	I1129 09:20:31.195146  219229 ssh_runner.go:195] Run: rm -f paused
	I1129 09:20:31.201208  219229 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:20:31.206616  219229 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-htmzr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.217168  219229 pod_ready.go:94] pod "coredns-5dd5756b68-htmzr" is "Ready"
	I1129 09:20:31.217243  219229 pod_ready.go:86] duration metric: took 10.548708ms for pod "coredns-5dd5756b68-htmzr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.223645  219229 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.234784  219229 pod_ready.go:94] pod "etcd-old-k8s-version-071895" is "Ready"
	I1129 09:20:31.234859  219229 pod_ready.go:86] duration metric: took 11.131317ms for pod "etcd-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.248582  219229 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.259407  219229 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-071895" is "Ready"
	I1129 09:20:31.259482  219229 pod_ready.go:86] duration metric: took 10.819537ms for pod "kube-apiserver-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.263998  219229 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.606531  219229 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-071895" is "Ready"
	I1129 09:20:31.606610  219229 pod_ready.go:86] duration metric: took 342.539937ms for pod "kube-controller-manager-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.808005  219229 pod_ready.go:83] waiting for pod "kube-proxy-4jxrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:32.206161  219229 pod_ready.go:94] pod "kube-proxy-4jxrn" is "Ready"
	I1129 09:20:32.206190  219229 pod_ready.go:86] duration metric: took 398.137324ms for pod "kube-proxy-4jxrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:32.422404  219229 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:32.806577  219229 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-071895" is "Ready"
	I1129 09:20:32.806676  219229 pod_ready.go:86] duration metric: took 384.18875ms for pod "kube-scheduler-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:32.806706  219229 pod_ready.go:40] duration metric: took 1.605412666s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
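	The readiness loop logged above can be reproduced by hand against the same label set; a minimal sketch, assuming the profile's kubectl context is still reachable (the label selectors and the 4-minute budget come from the "extra waiting up to 4m0s" line, the loop itself is illustrative):

	  # Wait for the same kube-system components minikube polls, one label at a time.
	  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	    kubectl --context old-k8s-version-071895 -n kube-system \
	      wait --for=condition=Ready pod -l "$sel" --timeout=4m
	  done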
	I1129 09:20:32.883122  219229 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1129 09:20:32.886925  219229 out.go:203] 
	W1129 09:20:32.889873  219229 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1129 09:20:32.892945  219229 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1129 09:20:32.896883  219229 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-071895" cluster and "default" namespace by default
	I1129 09:20:28.381724  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.074642707s)
	I1129 09:20:28.381753  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1129 09:20:28.381780  222878 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:20:28.381828  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:20:28.381907  222878 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.042136021s)
	I1129 09:20:28.381924  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1129 09:20:28.381944  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1129 09:20:31.974151  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.592291332s)
	I1129 09:20:31.974192  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1129 09:20:31.974218  222878 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:20:31.974299  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:20:32.697903  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1129 09:20:32.697943  222878 cache_images.go:125] Successfully loaded all cached images
	I1129 09:20:32.697949  222878 cache_images.go:94] duration metric: took 11.878914483s to LoadCachedImages
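	If the cached images loaded above need to be confirmed on the node, containerd's k8s.io namespace can be listed with the same ctr tooling the log shows importing them; a hedged sketch for the no-preload-230403 profile (the ssh invocation form is an assumption, ctr and the namespace name come from the log):

	  # List the images ctr just imported into containerd's k8s.io namespace.
	  minikube -p no-preload-230403 ssh -- \
	    sudo ctr -n=k8s.io images ls | grep -E 'kube-proxy|etcd|storage-provisioner'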
	I1129 09:20:32.697961  222878 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1129 09:20:32.698052  222878 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-230403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-230403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
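	The kubelet unit fragment above is what minikube templates onto the node; the effective unit, including the generated 10-kubeadm.conf drop-in written later in this log, can be inspected directly. A minimal sketch, assuming the profile is still running under the docker driver:

	  # Print the effective kubelet unit on the node, drop-ins included,
	  # and compare its ExecStart with the flags shown above.
	  minikube -p no-preload-230403 ssh -- sudo systemctl cat kubelet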
	I1129 09:20:32.698117  222878 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:20:32.724003  222878 cni.go:84] Creating CNI manager for ""
	I1129 09:20:32.724023  222878 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:20:32.724042  222878 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:20:32.724064  222878 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-230403 NodeName:no-preload-230403 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:20:32.724177  222878 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-230403"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
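	A config rendered like the one above can be checked client-side before kubeadm init ever runs; a hedged sketch, assuming the v1.34.1 kubeadm binary and the /var/tmp/minikube/kubeadm.yaml path that appear later in this log:

	  # Schema/version validation only; does not contact or modify the cluster.
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml
	  # Or exercise the full init path without persisting anything:
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run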
	
	I1129 09:20:32.724247  222878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:20:32.734586  222878 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1129 09:20:32.734661  222878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1129 09:20:32.744055  222878 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1129 09:20:32.744148  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1129 09:20:32.744244  222878 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
	I1129 09:20:32.744287  222878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:20:32.744372  222878 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256
	I1129 09:20:32.744422  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1129 09:20:32.765160  222878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1129 09:20:32.765194  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1129 09:20:32.765213  222878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1129 09:20:32.765239  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1129 09:20:32.765317  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1129 09:20:32.779265  222878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1129 09:20:32.779306  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
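	Each dl.k8s.io URL above publishes a matching .sha256 file, so a binary fetched outside minikube's cache can be verified the same way; a minimal sketch for the kubelet download (curl/sha256sum usage is illustrative, the URLs are taken from the log):

	  # Download kubelet v1.34.1 for linux/arm64 plus its checksum, then verify.
	  curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet
	  curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
	  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check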
	I1129 09:20:33.994121  222878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:20:34.006964  222878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1129 09:20:34.022992  222878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:20:34.039936  222878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1129 09:20:34.054478  222878 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:20:34.059158  222878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
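	Spelled out, the /etc/hosts one-liner above is an idempotent replace: strip any existing control-plane.minikube.internal entry, append the current IP, and copy the temp file back. An equivalent expanded form, using a hypothetical temp path:

	  # Remove a stale entry (if any), then append the current control-plane mapping.
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
	  printf '192.168.85.2\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
	  sudo cp /tmp/hosts.new /etc/hosts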
	I1129 09:20:34.071443  222878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:20:34.198077  222878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:20:34.225128  222878 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403 for IP: 192.168.85.2
	I1129 09:20:34.225153  222878 certs.go:195] generating shared ca certs ...
	I1129 09:20:34.225176  222878 certs.go:227] acquiring lock for ca certs: {Name:mke655c14945a8520f2f9de36531df923afb2bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:34.225330  222878 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key
	I1129 09:20:34.225385  222878 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key
	I1129 09:20:34.225397  222878 certs.go:257] generating profile certs ...
	I1129 09:20:34.225460  222878 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.key
	I1129 09:20:34.225477  222878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt with IP's: []
	I1129 09:20:34.561780  222878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt ...
	I1129 09:20:34.561812  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: {Name:mk0506510be8624c61cf78aca5533a42dbe12049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:34.562018  222878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.key ...
	I1129 09:20:34.562032  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.key: {Name:mk7728838f62624078d9f102edcc2e7e92fca24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:34.562134  222878 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key.9c37d96b
	I1129 09:20:34.562155  222878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt.9c37d96b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1129 09:20:35.279064  222878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt.9c37d96b ...
	I1129 09:20:35.279097  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt.9c37d96b: {Name:mkb8ab5f6d41eda35913c9ea362db6a34366a395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:35.279295  222878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key.9c37d96b ...
	I1129 09:20:35.279312  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key.9c37d96b: {Name:mk21caee54335560e86fdf60eec601c387bdb604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:35.279403  222878 certs.go:382] copying /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt.9c37d96b -> /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt
	I1129 09:20:35.279483  222878 certs.go:386] copying /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key.9c37d96b -> /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key
	I1129 09:20:35.279555  222878 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.key
	I1129 09:20:35.279573  222878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.crt with IP's: []
	I1129 09:20:35.662938  222878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.crt ...
	I1129 09:20:35.662968  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.crt: {Name:mk84c114a546c4abdb7a044023d46a90cfce8d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:35.663145  222878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.key ...
	I1129 09:20:35.663161  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.key: {Name:mk0fc11a967c87ab7d123db8f16798c3182082c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
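	The apiserver certificate generated above should carry exactly the IP SANs listed in the log (10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2); a hedged way to confirm that from the profile directory on the build host:

	  # Print the Subject Alternative Name extension of the generated apiserver cert.
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'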
	I1129 09:20:35.663352  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem (1338 bytes)
	W1129 09:20:35.663398  222878 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137_empty.pem, impossibly tiny 0 bytes
	I1129 09:20:35.663418  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:20:35.663446  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:20:35.663474  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:20:35.663499  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem (1679 bytes)
	I1129 09:20:35.663547  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:20:35.664157  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:20:35.691460  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1129 09:20:35.717525  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:20:35.745851  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:20:35.769815  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:20:35.790501  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:20:35.812066  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:20:35.830915  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:20:35.849395  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem --> /usr/share/ca-certificates/4137.pem (1338 bytes)
	I1129 09:20:35.872584  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /usr/share/ca-certificates/41372.pem (1708 bytes)
	I1129 09:20:35.893049  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:20:35.918494  222878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:20:35.936255  222878 ssh_runner.go:195] Run: openssl version
	I1129 09:20:35.943518  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41372.pem && ln -fs /usr/share/ca-certificates/41372.pem /etc/ssl/certs/41372.pem"
	I1129 09:20:35.954406  222878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41372.pem
	I1129 09:20:35.959997  222878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/41372.pem
	I1129 09:20:35.960085  222878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41372.pem
	I1129 09:20:36.006091  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41372.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:20:36.017475  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:20:36.027314  222878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:20:36.031927  222878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:20:36.031999  222878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:20:36.075486  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:20:36.084604  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4137.pem && ln -fs /usr/share/ca-certificates/4137.pem /etc/ssl/certs/4137.pem"
	I1129 09:20:36.094214  222878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4137.pem
	I1129 09:20:36.098768  222878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/4137.pem
	I1129 09:20:36.098840  222878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4137.pem
	I1129 09:20:36.143207  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4137.pem /etc/ssl/certs/51391683.0"
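	The symlink names created above come from OpenSSL's subject-name hash: /etc/ssl/certs/<hash>.0 points at the PEM file, which is how OpenSSL-based clients locate a trusted CA. Reproducing one mapping from this log (b5213941 is the hash paired with minikubeCA.pem a few lines up):

	  # The hash printed here is the basename of the /etc/ssl/certs symlink.
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	  ls -l /etc/ssl/certs/b5213941.0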
	I1129 09:20:36.152425  222878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:20:36.156708  222878 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:20:36.156761  222878 kubeadm.go:401] StartCluster: {Name:no-preload-230403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-230403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:20:36.156839  222878 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:20:36.156905  222878 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:20:36.184470  222878 cri.go:89] found id: ""
	I1129 09:20:36.184537  222878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:20:36.193057  222878 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:20:36.201441  222878 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:20:36.201527  222878 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:20:36.210060  222878 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:20:36.210079  222878 kubeadm.go:158] found existing configuration files:
	
	I1129 09:20:36.210164  222878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:20:36.218503  222878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:20:36.218590  222878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:20:36.226704  222878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:20:36.235392  222878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:20:36.235519  222878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:20:36.243976  222878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:20:36.252727  222878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:20:36.252802  222878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:20:36.261462  222878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:20:36.270714  222878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:20:36.270782  222878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:20:36.278924  222878 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:20:36.329064  222878 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:20:36.329252  222878 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:20:36.365187  222878 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:20:36.365275  222878 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1129 09:20:36.365324  222878 kubeadm.go:319] OS: Linux
	I1129 09:20:36.365388  222878 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:20:36.365445  222878 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1129 09:20:36.365513  222878 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:20:36.365576  222878 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:20:36.365638  222878 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:20:36.365702  222878 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:20:36.365769  222878 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:20:36.365832  222878 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:20:36.365884  222878 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1129 09:20:36.435193  222878 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:20:36.435380  222878 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:20:36.435539  222878 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:20:36.441349  222878 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:20:36.446636  222878 out.go:252]   - Generating certificates and keys ...
	I1129 09:20:36.446799  222878 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:20:36.446906  222878 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:20:37.362846  222878 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:20:37.721165  222878 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:20:37.949639  222878 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:20:38.413017  222878 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:20:38.775660  222878 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:20:38.776186  222878 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-230403] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:20:39.104705  222878 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:20:39.105064  222878 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-230403] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:20:39.359331  222878 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:20:39.818423  222878 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:20:39.880381  222878 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:20:39.880638  222878 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:20:41.216161  222878 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:20:42.199207  222878 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:20:42.918813  222878 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:20:43.410581  222878 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:20:43.826978  222878 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:20:43.827675  222878 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:20:43.830453  222878 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b9e829b9abde5       1611cd07b61d5       8 seconds ago       Running             busybox                   0                   ddd79130415cc       busybox                                          default
	f8f1e6dc2605a       97e04611ad434       16 seconds ago      Running             coredns                   0                   0f3ce8e6c4105       coredns-5dd5756b68-htmzr                         kube-system
	359d9432ef497       ba04bb24b9575       16 seconds ago      Running             storage-provisioner       0                   66250dabca2c6       storage-provisioner                              kube-system
	db1d77c6c85ea       b1a8c6f707935       27 seconds ago      Running             kindnet-cni               0                   78bf9329ff249       kindnet-58g5f                                    kube-system
	000a8de26034d       940f54a5bcae9       30 seconds ago      Running             kube-proxy                0                   ec6c1087a251f       kube-proxy-4jxrn                                 kube-system
	c6e9c9ab04ae1       46cc66ccc7c19       51 seconds ago      Running             kube-controller-manager   0                   16b3e81e696c9       kube-controller-manager-old-k8s-version-071895   kube-system
	41dff26eb8e67       9cdd6470f48c8       51 seconds ago      Running             etcd                      0                   468f2a4d8c24a       etcd-old-k8s-version-071895                      kube-system
	d34a4ced6121d       00543d2fe5d71       52 seconds ago      Running             kube-apiserver            0                   9630ead47757e       kube-apiserver-old-k8s-version-071895            kube-system
	7c5e9c05d20b8       762dce4090c5f       52 seconds ago      Running             kube-scheduler            0                   676bacb96168a       kube-scheduler-old-k8s-version-071895            kube-system
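	The table above is the CRI view captured on the node; while the profile is still up, a similar listing can be reproduced with crictl, the same tool this log invokes (the ssh invocation form is an assumption):

	  # List all CRI containers, including exited ones, on the old-k8s-version node.
	  minikube -p old-k8s-version-071895 ssh -- sudo crictl ps -a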
	
	
	==> containerd <==
	Nov 29 09:20:29 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:29.985394384Z" level=info msg="connecting to shim 359d9432ef4979d387512d5a2a5a3cd9fb7a0987f4a3540a23407b70f7faf163" address="unix:///run/containerd/s/34373f541c51fce0619cd6b7f9bbe560b47e8c8788713a29595219a5d22d901b" protocol=ttrpc version=3
	Nov 29 09:20:29 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:29.992937298Z" level=info msg="CreateContainer within sandbox \"0f3ce8e6c41050910070bab1a2edce113b2eb3bd98f3bca1d8006c18bcd1714f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.071685071Z" level=info msg="Container f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.095345483Z" level=info msg="CreateContainer within sandbox \"0f3ce8e6c41050910070bab1a2edce113b2eb3bd98f3bca1d8006c18bcd1714f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d\""
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.097739089Z" level=info msg="StartContainer for \"f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d\""
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.099238569Z" level=info msg="connecting to shim f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d" address="unix:///run/containerd/s/3150843ad07ed5a21377bb0ba6fe93d3c73033d9ccfa3b4a9e0ed16a5e8438c5" protocol=ttrpc version=3
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.189374834Z" level=info msg="StartContainer for \"359d9432ef4979d387512d5a2a5a3cd9fb7a0987f4a3540a23407b70f7faf163\" returns successfully"
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.269744369Z" level=info msg="StartContainer for \"f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d\" returns successfully"
	Nov 29 09:20:35 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:35.534277133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3abcbd08-d7c4-4a13-b94c-6f6424975411,Namespace:default,Attempt:0,}"
	Nov 29 09:20:35 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:35.597130515Z" level=info msg="connecting to shim ddd79130415cc8649c69caccfc081affa5f1da8a2517127cdbcf8d824a791490" address="unix:///run/containerd/s/6d78da511a42142891dae64b3eb6a171701a2aacf243055415398ac4ec21cd7a" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:20:35 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:35.703469012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3abcbd08-d7c4-4a13-b94c-6f6424975411,Namespace:default,Attempt:0,} returns sandbox id \"ddd79130415cc8649c69caccfc081affa5f1da8a2517127cdbcf8d824a791490\""
	Nov 29 09:20:35 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:35.712136437Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.805646978Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.808907002Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.812726259Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.815034818Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.816002472Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.103636897s"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.816153291Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.820479635Z" level=info msg="CreateContainer within sandbox \"ddd79130415cc8649c69caccfc081affa5f1da8a2517127cdbcf8d824a791490\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.837353396Z" level=info msg="Container b9e829b9abde5402e2cbe089579fccb3fcaa2d4225461d6d9fe9bceddbff0c20: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.849004626Z" level=info msg="CreateContainer within sandbox \"ddd79130415cc8649c69caccfc081affa5f1da8a2517127cdbcf8d824a791490\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"b9e829b9abde5402e2cbe089579fccb3fcaa2d4225461d6d9fe9bceddbff0c20\""
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.852339424Z" level=info msg="StartContainer for \"b9e829b9abde5402e2cbe089579fccb3fcaa2d4225461d6d9fe9bceddbff0c20\""
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.855127486Z" level=info msg="connecting to shim b9e829b9abde5402e2cbe089579fccb3fcaa2d4225461d6d9fe9bceddbff0c20" address="unix:///run/containerd/s/6d78da511a42142891dae64b3eb6a171701a2aacf243055415398ac4ec21cd7a" protocol=ttrpc version=3
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.962061310Z" level=info msg="StartContainer for \"b9e829b9abde5402e2cbe089579fccb3fcaa2d4225461d6d9fe9bceddbff0c20\" returns successfully"
	Nov 29 09:20:44 old-k8s-version-071895 containerd[758]: E1129 09:20:44.932672     758 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51515 - 3634 "HINFO IN 3397046818821823914.8081764445601178770. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005882235s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-071895
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-071895
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=old-k8s-version-071895
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_20_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:19:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-071895
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:20:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:20:33 +0000   Sat, 29 Nov 2025 09:19:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:20:33 +0000   Sat, 29 Nov 2025 09:19:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:20:33 +0000   Sat, 29 Nov 2025 09:19:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:20:33 +0000   Sat, 29 Nov 2025 09:20:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-071895
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                453a3f46-be9b-4440-b54b-7bd5b2275c63
	  Boot ID:                    6647f078-4edd-40c5-9d0e-49eb5ed00bd7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-htmzr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-old-k8s-version-071895                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         43s
	  kube-system                 kindnet-58g5f                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-071895             250m (12%)    0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-controller-manager-old-k8s-version-071895    200m (10%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-4jxrn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-071895             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30s   kube-proxy       
	  Normal  Starting                 44s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s   kubelet          Node old-k8s-version-071895 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s   kubelet          Node old-k8s-version-071895 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s   kubelet          Node old-k8s-version-071895 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           32s   node-controller  Node old-k8s-version-071895 event: Registered Node old-k8s-version-071895 in Controller
	  Normal  NodeReady                17s   kubelet          Node old-k8s-version-071895 status is now: NodeReady
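	The node description above can be regenerated against a live profile; a minimal sketch using the context and node name from the log:

	  # Reproduce the "describe nodes" section for the control-plane node.
	  kubectl --context old-k8s-version-071895 describe node old-k8s-version-071895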
	
	
	==> dmesg <==
	[Nov29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014634] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.570975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032231] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767655] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.282538] kauditd_printk_skb: 36 callbacks suppressed
	[Nov29 08:39] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001077] FS-Cache: O-cookie d=00000000b08097f7{9P.session} n=00000000a17ba85f
	[  +0.001074] FS-Cache: O-key=[10] '34323935323231393134'
	[  +0.000776] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=00000000b08097f7{9P.session} n=00000000534469ad
	[  +0.001092] FS-Cache: N-key=[10] '34323935323231393134'
	[Nov29 09:19] hrtimer: interrupt took 12545193 ns
	
	
	==> etcd [41dff26eb8e679cc29a87f83f59d117073bdaeb9ac41cb8ac8ee1cb32c92511a] <==
	{"level":"info","ts":"2025-11-29T09:19:54.897611Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:19:54.901566Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-29T09:19:54.901625Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-29T09:19:55.060661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-29T09:19:55.060785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-29T09:19:55.060882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-29T09:19:55.060949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-29T09:19:55.060985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:19:55.061056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-29T09:19:55.06113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:19:55.062447Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-071895 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-29T09:19:55.062536Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:19:55.063797Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-29T09:19:55.063991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:19:55.065852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:19:55.066951Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-29T09:19:55.067534Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:19:55.067717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:19:55.071793Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:19:55.071959Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-29T09:19:55.072006Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-29T09:20:15.052803Z","caller":"traceutil/trace.go:171","msg":"trace[25407896] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"106.818617ms","start":"2025-11-29T09:20:14.945956Z","end":"2025-11-29T09:20:15.052774Z","steps":["trace[25407896] 'process raft request'  (duration: 106.616925ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:20:15.064957Z","caller":"traceutil/trace.go:171","msg":"trace[1542162002] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"106.599802ms","start":"2025-11-29T09:20:14.95834Z","end":"2025-11-29T09:20:15.064939Z","steps":["trace[1542162002] 'process raft request'  (duration: 106.563165ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:20:15.065342Z","caller":"traceutil/trace.go:171","msg":"trace[758518492] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"119.137568ms","start":"2025-11-29T09:20:14.946194Z","end":"2025-11-29T09:20:15.065332Z","steps":["trace[758518492] 'process raft request'  (duration: 118.584375ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:20:15.065438Z","caller":"traceutil/trace.go:171","msg":"trace[2009828336] transaction","detail":"{read_only:false; response_revision:299; number_of_response:1; }","duration":"112.325548ms","start":"2025-11-29T09:20:14.953105Z","end":"2025-11-29T09:20:15.065431Z","steps":["trace[2009828336] 'process raft request'  (duration: 111.76593ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:20:46 up  1:03,  0 user,  load average: 2.65, 2.59, 2.59
	Linux old-k8s-version-071895 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [db1d77c6c85eaf5ebd7dc839fb54d40271ee80c34795b249a47534f35c064f1c] <==
	I1129 09:20:19.083145       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:20:19.083520       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:20:19.083647       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:20:19.083659       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:20:19.083671       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:20:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:20:19.286160       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:20:19.286239       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:20:19.286373       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:20:19.287882       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:20:19.580767       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:20:19.580802       1 metrics.go:72] Registering metrics
	I1129 09:20:19.580865       1 controller.go:711] "Syncing nftables rules"
	I1129 09:20:29.287220       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:20:29.287264       1 main.go:301] handling current node
	I1129 09:20:39.286004       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:20:39.286281       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d34a4ced6121deea5f0e58655a9a45e86fccdde412c9acf3d1e35ab330cd1b4b] <==
	I1129 09:19:58.687723       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1129 09:19:58.689876       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1129 09:19:58.689902       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1129 09:19:58.690079       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1129 09:19:58.691126       1 aggregator.go:166] initial CRD sync complete...
	I1129 09:19:58.691143       1 autoregister_controller.go:141] Starting autoregister controller
	I1129 09:19:58.691150       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 09:19:58.691158       1 cache.go:39] Caches are synced for autoregister controller
	E1129 09:19:58.752402       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I1129 09:19:58.885509       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:19:59.184340       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:19:59.194717       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:19:59.195065       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:20:00.545658       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:20:00.693098       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:20:00.863619       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:20:00.877801       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 09:20:00.879300       1 controller.go:624] quota admission added evaluator for: endpoints
	I1129 09:20:00.885677       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:20:01.758115       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1129 09:20:02.382429       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1129 09:20:02.396930       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:20:02.411199       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1129 09:20:15.297358       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1129 09:20:15.463834       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [c6e9c9ab04ae16e634fbb9b4e1d16587356b43ecc4799412da2e56e79409870b] <==
	I1129 09:20:15.111764       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-071895" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1129 09:20:15.174792       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-071895" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1129 09:20:15.320980       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1129 09:20:15.351255       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:20:15.351286       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1129 09:20:15.384164       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:20:15.486462       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4jxrn"
	I1129 09:20:15.486489       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-58g5f"
	I1129 09:20:15.643761       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rk2xx"
	I1129 09:20:15.661237       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-htmzr"
	I1129 09:20:15.701868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="389.914526ms"
	I1129 09:20:15.744722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.452889ms"
	I1129 09:20:15.746651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.963µs"
	I1129 09:20:17.246540       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1129 09:20:17.300225       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-rk2xx"
	I1129 09:20:17.312673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.947307ms"
	I1129 09:20:17.322261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.52965ms"
	I1129 09:20:17.323333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="806.512µs"
	I1129 09:20:29.431259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.143µs"
	I1129 09:20:29.490111       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.681µs"
	I1129 09:20:30.130582       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1129 09:20:30.130619       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-htmzr" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-htmzr"
	I1129 09:20:30.131138       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1129 09:20:31.018335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.226889ms"
	I1129 09:20:31.018459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.794µs"
	
	
	==> kube-proxy [000a8de26034dcdc6da38237d77f79fa914b3088e593f0bbd13e14b39b42bf00] <==
	I1129 09:20:16.555876       1 server_others.go:69] "Using iptables proxy"
	I1129 09:20:16.579548       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1129 09:20:16.643168       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:20:16.645058       1 server_others.go:152] "Using iptables Proxier"
	I1129 09:20:16.645109       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1129 09:20:16.645128       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1129 09:20:16.645164       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1129 09:20:16.645384       1 server.go:846] "Version info" version="v1.28.0"
	I1129 09:20:16.645401       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:20:16.657042       1 config.go:188] "Starting service config controller"
	I1129 09:20:16.657067       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1129 09:20:16.657128       1 config.go:97] "Starting endpoint slice config controller"
	I1129 09:20:16.657132       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1129 09:20:16.657163       1 config.go:315] "Starting node config controller"
	I1129 09:20:16.657166       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1129 09:20:16.757328       1 shared_informer.go:318] Caches are synced for node config
	I1129 09:20:16.757472       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1129 09:20:16.757514       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [7c5e9c05d20b870a1e96cdb0bbf1479f013609a2bbcde73ff5f9b106d4a35049] <==
	I1129 09:19:58.666858       1 serving.go:348] Generated self-signed cert in-memory
	W1129 09:20:00.321955       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 09:20:00.322235       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 09:20:00.322326       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 09:20:00.322411       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 09:20:00.396927       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1129 09:20:00.399854       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:20:00.419574       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1129 09:20:00.431997       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:20:00.432131       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1129 09:20:00.432227       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1129 09:20:00.482293       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1129 09:20:00.482341       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1129 09:20:01.932942       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.542826    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e4bdb82-85e5-468b-80dc-0481c990f117-kube-proxy\") pod \"kube-proxy-4jxrn\" (UID: \"3e4bdb82-85e5-468b-80dc-0481c990f117\") " pod="kube-system/kube-proxy-4jxrn"
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.542946    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d4743cee-0834-4a44-9cf7-d0228aa5b843-cni-cfg\") pod \"kindnet-58g5f\" (UID: \"d4743cee-0834-4a44-9cf7-d0228aa5b843\") " pod="kube-system/kindnet-58g5f"
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.543093    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4743cee-0834-4a44-9cf7-d0228aa5b843-xtables-lock\") pod \"kindnet-58g5f\" (UID: \"d4743cee-0834-4a44-9cf7-d0228aa5b843\") " pod="kube-system/kindnet-58g5f"
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.543236    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcrqh\" (UniqueName: \"kubernetes.io/projected/3e4bdb82-85e5-468b-80dc-0481c990f117-kube-api-access-zcrqh\") pod \"kube-proxy-4jxrn\" (UID: \"3e4bdb82-85e5-468b-80dc-0481c990f117\") " pod="kube-system/kube-proxy-4jxrn"
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.543388    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4743cee-0834-4a44-9cf7-d0228aa5b843-lib-modules\") pod \"kindnet-58g5f\" (UID: \"d4743cee-0834-4a44-9cf7-d0228aa5b843\") " pod="kube-system/kindnet-58g5f"
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.543527    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfjbl\" (UniqueName: \"kubernetes.io/projected/d4743cee-0834-4a44-9cf7-d0228aa5b843-kube-api-access-hfjbl\") pod \"kindnet-58g5f\" (UID: \"d4743cee-0834-4a44-9cf7-d0228aa5b843\") " pod="kube-system/kindnet-58g5f"
	Nov 29 09:20:16 old-k8s-version-071895 kubelet[1545]: I1129 09:20:16.904236    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4jxrn" podStartSLOduration=1.904182809 podCreationTimestamp="2025-11-29 09:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:20:16.903944744 +0000 UTC m=+14.574033893" watchObservedRunningTime="2025-11-29 09:20:16.904182809 +0000 UTC m=+14.574271949"
	Nov 29 09:20:22 old-k8s-version-071895 kubelet[1545]: I1129 09:20:22.690149    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-58g5f" podStartSLOduration=5.068996977 podCreationTimestamp="2025-11-29 09:20:15 +0000 UTC" firstStartedPulling="2025-11-29 09:20:16.131889821 +0000 UTC m=+13.801978953" lastFinishedPulling="2025-11-29 09:20:18.75299704 +0000 UTC m=+16.423086171" observedRunningTime="2025-11-29 09:20:19.919717563 +0000 UTC m=+17.589806703" watchObservedRunningTime="2025-11-29 09:20:22.690104195 +0000 UTC m=+20.360193335"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.372571    1545 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.424392    1545 topology_manager.go:215] "Topology Admit Handler" podUID="784fe707-ae15-4eae-a70c-ec084ce3d812" podNamespace="kube-system" podName="storage-provisioner"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.431465    1545 topology_manager.go:215] "Topology Admit Handler" podUID="c6b5f2ee-df4f-40a3-be2e-6f16e858e497" podNamespace="kube-system" podName="coredns-5dd5756b68-htmzr"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.459512    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/784fe707-ae15-4eae-a70c-ec084ce3d812-tmp\") pod \"storage-provisioner\" (UID: \"784fe707-ae15-4eae-a70c-ec084ce3d812\") " pod="kube-system/storage-provisioner"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.459744    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzcr9\" (UniqueName: \"kubernetes.io/projected/784fe707-ae15-4eae-a70c-ec084ce3d812-kube-api-access-hzcr9\") pod \"storage-provisioner\" (UID: \"784fe707-ae15-4eae-a70c-ec084ce3d812\") " pod="kube-system/storage-provisioner"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.459885    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9tz\" (UniqueName: \"kubernetes.io/projected/c6b5f2ee-df4f-40a3-be2e-6f16e858e497-kube-api-access-ch9tz\") pod \"coredns-5dd5756b68-htmzr\" (UID: \"c6b5f2ee-df4f-40a3-be2e-6f16e858e497\") " pod="kube-system/coredns-5dd5756b68-htmzr"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.460022    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6b5f2ee-df4f-40a3-be2e-6f16e858e497-config-volume\") pod \"coredns-5dd5756b68-htmzr\" (UID: \"c6b5f2ee-df4f-40a3-be2e-6f16e858e497\") " pod="kube-system/coredns-5dd5756b68-htmzr"
	Nov 29 09:20:30 old-k8s-version-071895 kubelet[1545]: I1129 09:20:30.997910    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.997856203 podCreationTimestamp="2025-11-29 09:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:20:30.970917027 +0000 UTC m=+28.641006167" watchObservedRunningTime="2025-11-29 09:20:30.997856203 +0000 UTC m=+28.667945343"
	Nov 29 09:20:33 old-k8s-version-071895 kubelet[1545]: I1129 09:20:33.708750    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-htmzr" podStartSLOduration=18.708653504 podCreationTimestamp="2025-11-29 09:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:20:30.99830195 +0000 UTC m=+28.668391090" watchObservedRunningTime="2025-11-29 09:20:33.708653504 +0000 UTC m=+31.378742653"
	Nov 29 09:20:33 old-k8s-version-071895 kubelet[1545]: I1129 09:20:33.709581    1545 topology_manager.go:215] "Topology Admit Handler" podUID="3abcbd08-d7c4-4a13-b94c-6f6424975411" podNamespace="default" podName="busybox"
	Nov 29 09:20:33 old-k8s-version-071895 kubelet[1545]: W1129 09:20:33.759772    1545 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-071895" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-071895' and this object
	Nov 29 09:20:33 old-k8s-version-071895 kubelet[1545]: E1129 09:20:33.759821    1545 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-071895" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-071895' and this object
	Nov 29 09:20:33 old-k8s-version-071895 kubelet[1545]: I1129 09:20:33.794129    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w6jg\" (UniqueName: \"kubernetes.io/projected/3abcbd08-d7c4-4a13-b94c-6f6424975411-kube-api-access-7w6jg\") pod \"busybox\" (UID: \"3abcbd08-d7c4-4a13-b94c-6f6424975411\") " pod="default/busybox"
	Nov 29 09:20:34 old-k8s-version-071895 kubelet[1545]: E1129 09:20:34.906850    1545 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 09:20:34 old-k8s-version-071895 kubelet[1545]: E1129 09:20:34.908357    1545 projected.go:198] Error preparing data for projected volume kube-api-access-7w6jg for pod default/busybox: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 09:20:34 old-k8s-version-071895 kubelet[1545]: E1129 09:20:34.908523    1545 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3abcbd08-d7c4-4a13-b94c-6f6424975411-kube-api-access-7w6jg podName:3abcbd08-d7c4-4a13-b94c-6f6424975411 nodeName:}" failed. No retries permitted until 2025-11-29 09:20:35.408496185 +0000 UTC m=+33.078585316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7w6jg" (UniqueName: "kubernetes.io/projected/3abcbd08-d7c4-4a13-b94c-6f6424975411-kube-api-access-7w6jg") pod "busybox" (UID: "3abcbd08-d7c4-4a13-b94c-6f6424975411") : failed to sync configmap cache: timed out waiting for the condition
	Nov 29 09:20:37 old-k8s-version-071895 kubelet[1545]: I1129 09:20:37.992486    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.8817945099999998 podCreationTimestamp="2025-11-29 09:20:33 +0000 UTC" firstStartedPulling="2025-11-29 09:20:35.706009491 +0000 UTC m=+33.376098623" lastFinishedPulling="2025-11-29 09:20:37.816649729 +0000 UTC m=+35.486738869" observedRunningTime="2025-11-29 09:20:37.991952135 +0000 UTC m=+35.662041292" watchObservedRunningTime="2025-11-29 09:20:37.992434756 +0000 UTC m=+35.662523896"
	
	
	==> storage-provisioner [359d9432ef4979d387512d5a2a5a3cd9fb7a0987f4a3540a23407b70f7faf163] <==
	I1129 09:20:30.214942       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:20:30.235967       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:20:30.236210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1129 09:20:30.252227       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:20:30.255628       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-071895_105725d4-e591-4aa3-af10-2659a9fed2c2!
	I1129 09:20:30.273258       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8dbb900-fced-4c3d-a6ea-15b88c536670", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-071895_105725d4-e591-4aa3-af10-2659a9fed2c2 became leader
	I1129 09:20:30.355956       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-071895_105725d4-e591-4aa3-af10-2659a9fed2c2!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-071895 -n old-k8s-version-071895
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-071895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-071895
helpers_test.go:243: (dbg) docker inspect old-k8s-version-071895:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0",
	        "Created": "2025-11-29T09:19:35.843753446Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 219639,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:19:35.922684387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0/hosts",
	        "LogPath": "/var/lib/docker/containers/cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0/cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0-json.log",
	        "Name": "/old-k8s-version-071895",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-071895:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-071895",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cb39490005387f20e45d85449d7cd3926a38c4f6954c93fdb4e9a9d8c1dd56c0",
	                "LowerDir": "/var/lib/docker/overlay2/39dddc1dab2647088ef22e0a22ddfff676f8c9bdc540988436a11252cc093aa5-init/diff:/var/lib/docker/overlay2/fc2ab0019906b90b3f033fa414f560878b73f7ff0ebdf77a0b554a40813009d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39dddc1dab2647088ef22e0a22ddfff676f8c9bdc540988436a11252cc093aa5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39dddc1dab2647088ef22e0a22ddfff676f8c9bdc540988436a11252cc093aa5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39dddc1dab2647088ef22e0a22ddfff676f8c9bdc540988436a11252cc093aa5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-071895",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-071895/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-071895",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-071895",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-071895",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "60a614c2d74d8f721c5d191b45e8f8728a313afe9d5488b154acf3a0ac189fb9",
	            "SandboxKey": "/var/run/docker/netns/60a614c2d74d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-071895": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:be:6c:06:cc:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "46e34ec2f3d70587bfaede542f848856d8f0dbb2dcdc34fe102884ad13766b95",
	                    "EndpointID": "2663a5dbde2357e0d7269cf1f8d9d8bb11ffe6e49aa8754901238cb93acbbf02",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-071895",
	                        "cb3949000538"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-071895 -n old-k8s-version-071895
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-071895 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-071895 logs -n 25: (1.675309086s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-420729 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo containerd config dump                                                                                                                                                                                                        │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo crio config                                                                                                                                                                                                                   │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ delete  │ -p cilium-420729                                                                                                                                                                                                                                    │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p force-systemd-env-559836 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ ssh     │ force-systemd-env-559836 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ delete  │ -p force-systemd-env-559836                                                                                                                                                                                                                         │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p cert-expiration-592440 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ delete  │ -p running-upgrade-115889                                                                                                                                                                                                                           │ running-upgrade-115889   │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ start   │ -p cert-options-515442 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:19 UTC │
	│ ssh     │ cert-options-515442 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ ssh     │ -p cert-options-515442 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ delete  │ -p cert-options-515442                                                                                                                                                                                                                              │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p cert-expiration-592440 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ delete  │ -p cert-expiration-592440                                                                                                                                                                                                                           │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-230403        │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:20:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:20:12.939624  222878 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:20:12.939853  222878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:20:12.939881  222878 out.go:374] Setting ErrFile to fd 2...
	I1129 09:20:12.939901  222878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:20:12.940241  222878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:20:12.940820  222878 out.go:368] Setting JSON to false
	I1129 09:20:12.941892  222878 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3764,"bootTime":1764404249,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 09:20:12.941996  222878 start.go:143] virtualization:  
	I1129 09:20:12.947843  222878 out.go:179] * [no-preload-230403] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:20:12.951543  222878 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:20:12.951778  222878 notify.go:221] Checking for updates...
	I1129 09:20:12.959740  222878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:20:12.963748  222878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:20:12.967028  222878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 09:20:12.970194  222878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:20:12.973266  222878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:20:12.976789  222878 config.go:182] Loaded profile config "old-k8s-version-071895": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:20:12.976879  222878 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:20:13.015916  222878 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:20:13.016116  222878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:20:13.089040  222878 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:20:13.078615429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:20:13.089149  222878 docker.go:319] overlay module found
	I1129 09:20:13.094585  222878 out.go:179] * Using the docker driver based on user configuration
	I1129 09:20:13.101060  222878 start.go:309] selected driver: docker
	I1129 09:20:13.101087  222878 start.go:927] validating driver "docker" against <nil>
	I1129 09:20:13.101110  222878 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:20:13.101860  222878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:20:13.162298  222878 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:20:13.152737541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:20:13.162462  222878 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:20:13.162689  222878 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:20:13.165689  222878 out.go:179] * Using Docker driver with root privileges
	I1129 09:20:13.168555  222878 cni.go:84] Creating CNI manager for ""
	I1129 09:20:13.168702  222878 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:20:13.168717  222878 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:20:13.168799  222878 start.go:353] cluster config:
	{Name:no-preload-230403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-230403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:20:13.171944  222878 out.go:179] * Starting "no-preload-230403" primary control-plane node in "no-preload-230403" cluster
	I1129 09:20:13.174795  222878 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:20:13.177867  222878 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:20:13.180600  222878 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:20:13.180815  222878 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:20:13.180863  222878 cache.go:107] acquiring lock: {Name:mkc9ca05df03f187ae0239342774baa6ad8c9aea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.180958  222878 cache.go:107] acquiring lock: {Name:mk1a5c919477c9b6035d1da624b0b2445dbe0e73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181026  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 09:20:13.181043  222878 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 86.212µs
	I1129 09:20:13.181062  222878 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 09:20:13.181080  222878 cache.go:107] acquiring lock: {Name:mk74fc1ce0ee5a4f599a03d41c7dab91b2a2e933 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181115  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 09:20:13.181125  222878 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 46.598µs
	I1129 09:20:13.181131  222878 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 09:20:13.181141  222878 cache.go:107] acquiring lock: {Name:mk8695629c5903582c523a837d766d417499d914 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181179  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 09:20:13.181189  222878 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 49.445µs
	I1129 09:20:13.181196  222878 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 09:20:13.181205  222878 cache.go:107] acquiring lock: {Name:mk6962b4fc4c58f41448580e388a757daf8f6018 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181239  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 09:20:13.181249  222878 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 44.94µs
	I1129 09:20:13.181255  222878 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 09:20:13.181269  222878 cache.go:107] acquiring lock: {Name:mk75f52747e0531666c302459e925614b33b76b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181314  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1129 09:20:13.181323  222878 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 55.639µs
	I1129 09:20:13.181332  222878 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 09:20:13.181345  222878 cache.go:107] acquiring lock: {Name:mke59d5887f27460b7717e6fa1d7c7be222b2ad7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181380  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 09:20:13.181391  222878 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 46.433µs
	I1129 09:20:13.181396  222878 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 09:20:13.181409  222878 cache.go:107] acquiring lock: {Name:mkece740ade6508db73b1e245e73f976785e2996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.181442  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 09:20:13.181450  222878 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 45.654µs
	I1129 09:20:13.181455  222878 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 09:20:13.181552  222878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/config.json ...
	I1129 09:20:13.181573  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/config.json: {Name:mkedfced3d2b7fa7d1f9faae9aecd4cdc6897bf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:13.181779  222878 cache.go:115] /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 09:20:13.181796  222878 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 942.365µs
	I1129 09:20:13.181804  222878 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 09:20:13.181857  222878 cache.go:87] Successfully saved all images to host disk.
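Note: the cache checks above confirm that every control-plane image for v1.34.1 already exists as a tarball under the profile's cache directory, so this --preload=false run loads images from those tarballs rather than from a preloaded archive. A quick manual check of the same state (paths taken from the log above, assuming the job's MINIKUBE_HOME) is:

	# list the arm64 image tarballs minikube will load into containerd
	ls /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/
	ls /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/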
	I1129 09:20:13.201388  222878 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:20:13.201410  222878 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:20:13.201431  222878 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:20:13.201462  222878 start.go:360] acquireMachinesLock for no-preload-230403: {Name:mk2a91c20925489376678f93ce44b3d1de57601f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:20:13.201622  222878 start.go:364] duration metric: took 139.242µs to acquireMachinesLock for "no-preload-230403"
	I1129 09:20:13.201663  222878 start.go:93] Provisioning new machine with config: &{Name:no-preload-230403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-230403 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:20:13.201746  222878 start.go:125] createHost starting for "" (driver="docker")
	I1129 09:20:09.378511  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:09.878391  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:10.379008  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:10.879016  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:11.378477  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:11.879067  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:12.378498  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:12.878370  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:13.378426  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:13.879213  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:14.378760  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:14.880612  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:15.379061  219229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:20:15.530412  219229 kubeadm.go:1114] duration metric: took 11.369681639s to wait for elevateKubeSystemPrivileges
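Note: the repeated kubectl runs above are the bootstrapper polling, roughly twice a second, for the default ServiceAccount to appear in the default namespace before declaring the cluster started. The equivalent one-off check against this profile (a sketch; the context name is taken from the log above) is:

	# succeeds once the default ServiceAccount exists
	kubectl --context old-k8s-version-071895 -n default get serviceaccount default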
	I1129 09:20:15.530446  219229 kubeadm.go:403] duration metric: took 31.525981112s to StartCluster
	I1129 09:20:15.530463  219229 settings.go:142] acquiring lock: {Name:mk44917d1324740eeda65bf3aa312ad1561d3ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:15.530529  219229 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:20:15.531211  219229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:15.531425  219229 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:20:15.531520  219229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:20:15.531760  219229 config.go:182] Loaded profile config "old-k8s-version-071895": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:20:15.531752  219229 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:20:15.531869  219229 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-071895"
	I1129 09:20:15.531886  219229 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-071895"
	I1129 09:20:15.531914  219229 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:20:15.532442  219229 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:20:15.532702  219229 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-071895"
	I1129 09:20:15.532736  219229 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-071895"
	I1129 09:20:15.533094  219229 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:20:15.536113  219229 out.go:179] * Verifying Kubernetes components...
	I1129 09:20:15.539443  219229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:20:15.574128  219229 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-071895"
	I1129 09:20:15.574169  219229 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:20:15.574614  219229 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:20:15.575661  219229 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:15.578616  219229 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:20:15.578636  219229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:20:15.578703  219229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:20:15.596399  219229 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:20:15.596427  219229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:20:15.596503  219229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:20:15.630157  219229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:20:15.639128  219229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:20:15.896152  219229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:20:15.896336  219229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:20:16.015161  219229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:20:16.026843  219229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:20:17.194520  219229 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.298139458s)
	I1129 09:20:17.194560  219229 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
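Note: the pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the profile's gateway (192.168.76.1 for old-k8s-version-071895). A hedged way to confirm the injected hosts block is simply to dump the patched Corefile:

	# the "hosts { 192.168.76.1 host.minikube.internal ... }" stanza should now be present
	kubectl --context old-k8s-version-071895 -n kube-system get configmap coredns -o yaml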
	I1129 09:20:17.195641  219229 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.299459942s)
	I1129 09:20:17.196336  219229 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-071895" to be "Ready" ...
	I1129 09:20:17.598641  219229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.583439516s)
	I1129 09:20:17.598752  219229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.571873758s)
	I1129 09:20:17.633446  219229 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 09:20:13.207006  222878 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:20:13.207293  222878 start.go:159] libmachine.API.Create for "no-preload-230403" (driver="docker")
	I1129 09:20:13.207340  222878 client.go:173] LocalClient.Create starting
	I1129 09:20:13.207488  222878 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem
	I1129 09:20:13.207529  222878 main.go:143] libmachine: Decoding PEM data...
	I1129 09:20:13.207573  222878 main.go:143] libmachine: Parsing certificate...
	I1129 09:20:13.207655  222878 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem
	I1129 09:20:13.207690  222878 main.go:143] libmachine: Decoding PEM data...
	I1129 09:20:13.207710  222878 main.go:143] libmachine: Parsing certificate...
	I1129 09:20:13.208128  222878 cli_runner.go:164] Run: docker network inspect no-preload-230403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:20:13.227770  222878 cli_runner.go:211] docker network inspect no-preload-230403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:20:13.227856  222878 network_create.go:284] running [docker network inspect no-preload-230403] to gather additional debugging logs...
	I1129 09:20:13.227880  222878 cli_runner.go:164] Run: docker network inspect no-preload-230403
	W1129 09:20:13.250504  222878 cli_runner.go:211] docker network inspect no-preload-230403 returned with exit code 1
	I1129 09:20:13.250537  222878 network_create.go:287] error running [docker network inspect no-preload-230403]: docker network inspect no-preload-230403: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-230403 not found
	I1129 09:20:13.250551  222878 network_create.go:289] output of [docker network inspect no-preload-230403]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-230403 not found
	
	** /stderr **
	I1129 09:20:13.250655  222878 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:20:13.269213  222878 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8664e809540f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:5a:a5:48:89:fb} reservation:<nil>}
	I1129 09:20:13.269665  222878 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe5a1fed3d29 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8e:0c:ca:69:14:77} reservation:<nil>}
	I1129 09:20:13.270007  222878 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c3b36bc67c6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:2d:06:dd:2d:03} reservation:<nil>}
	I1129 09:20:13.270333  222878 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-46e34ec2f3d7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:63:b9:c9:b8:a0} reservation:<nil>}
	I1129 09:20:13.270853  222878 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a000e0}
	I1129 09:20:13.270885  222878 network_create.go:124] attempt to create docker network no-preload-230403 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1129 09:20:13.270944  222878 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-230403 no-preload-230403
	I1129 09:20:13.339116  222878 network_create.go:108] docker network no-preload-230403 192.168.85.0/24 created
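Note: the "skipping subnet" lines show minikube walking the private /24 ranges already claimed by other profile bridges before settling on 192.168.85.0/24 for no-preload-230403. Reproducing that step by hand comes down to the same docker commands quoted above (a sketch, with the options trimmed to the ones that matter here):

	# see which subnets the existing bridge networks already occupy
	docker network ls
	docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# create the profile network on the first free range, as the log does
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-230403 \
	  no-preload-230403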
	I1129 09:20:13.339148  222878 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-230403" container
	I1129 09:20:13.339222  222878 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:20:13.358931  222878 cli_runner.go:164] Run: docker volume create no-preload-230403 --label name.minikube.sigs.k8s.io=no-preload-230403 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:20:13.376848  222878 oci.go:103] Successfully created a docker volume no-preload-230403
	I1129 09:20:13.376977  222878 cli_runner.go:164] Run: docker run --rm --name no-preload-230403-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-230403 --entrypoint /usr/bin/test -v no-preload-230403:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:20:13.960824  222878 oci.go:107] Successfully prepared a docker volume no-preload-230403
	I1129 09:20:13.960886  222878 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1129 09:20:13.961020  222878 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1129 09:20:13.961137  222878 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:20:14.052602  222878 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-230403 --name no-preload-230403 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-230403 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-230403 --network no-preload-230403 --ip 192.168.85.2 --volume no-preload-230403:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:20:14.434508  222878 cli_runner.go:164] Run: docker container inspect no-preload-230403 --format={{.State.Running}}
	I1129 09:20:14.469095  222878 cli_runner.go:164] Run: docker container inspect no-preload-230403 --format={{.State.Status}}
	I1129 09:20:14.505837  222878 cli_runner.go:164] Run: docker exec no-preload-230403 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:20:14.574820  222878 oci.go:144] the created container "no-preload-230403" has a running status.
	I1129 09:20:14.574847  222878 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa...
	I1129 09:20:14.765899  222878 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:20:14.803197  222878 cli_runner.go:164] Run: docker container inspect no-preload-230403 --format={{.State.Status}}
	I1129 09:20:14.838341  222878 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:20:14.838366  222878 kic_runner.go:114] Args: [docker exec --privileged no-preload-230403 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:20:14.971747  222878 cli_runner.go:164] Run: docker container inspect no-preload-230403 --format={{.State.Status}}
	I1129 09:20:14.997195  222878 machine.go:94] provisionDockerMachine start ...
	I1129 09:20:14.997331  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:15.036227  222878 main.go:143] libmachine: Using SSH client type: native
	I1129 09:20:15.036638  222878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:20:15.036651  222878 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:20:15.042876  222878 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 09:20:17.636479  219229 addons.go:530] duration metric: took 2.104720222s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:20:17.699584  219229 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-071895" context rescaled to 1 replicas
	W1129 09:20:19.201224  219229 node_ready.go:57] node "old-k8s-version-071895" has "Ready":"False" status (will retry)
	I1129 09:20:18.208511  222878 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-230403
	
	I1129 09:20:18.208576  222878 ubuntu.go:182] provisioning hostname "no-preload-230403"
	I1129 09:20:18.208750  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:18.231955  222878 main.go:143] libmachine: Using SSH client type: native
	I1129 09:20:18.232303  222878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:20:18.232314  222878 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-230403 && echo "no-preload-230403" | sudo tee /etc/hostname
	I1129 09:20:18.417308  222878 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-230403
	
	I1129 09:20:18.417502  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:18.446833  222878 main.go:143] libmachine: Using SSH client type: native
	I1129 09:20:18.447196  222878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:20:18.447217  222878 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-230403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-230403/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-230403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:20:18.609294  222878 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:20:18.609323  222878 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-2317/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-2317/.minikube}
	I1129 09:20:18.609357  222878 ubuntu.go:190] setting up certificates
	I1129 09:20:18.609367  222878 provision.go:84] configureAuth start
	I1129 09:20:18.609424  222878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-230403
	I1129 09:20:18.633658  222878 provision.go:143] copyHostCerts
	I1129 09:20:18.633724  222878 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem, removing ...
	I1129 09:20:18.633733  222878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem
	I1129 09:20:18.633804  222878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem (1082 bytes)
	I1129 09:20:18.633884  222878 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem, removing ...
	I1129 09:20:18.633890  222878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem
	I1129 09:20:18.633917  222878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem (1123 bytes)
	I1129 09:20:18.633975  222878 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem, removing ...
	I1129 09:20:18.633979  222878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem
	I1129 09:20:18.634022  222878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem (1679 bytes)
	I1129 09:20:18.634072  222878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem org=jenkins.no-preload-230403 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-230403]
	I1129 09:20:18.830643  222878 provision.go:177] copyRemoteCerts
	I1129 09:20:18.830732  222878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:20:18.830804  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:18.849046  222878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa Username:docker}
	I1129 09:20:18.957503  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:20:18.982683  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:20:19.017142  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:20:19.036354  222878 provision.go:87] duration metric: took 426.964935ms to configureAuth
	I1129 09:20:19.036391  222878 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:20:19.036594  222878 config.go:182] Loaded profile config "no-preload-230403": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:20:19.036608  222878 machine.go:97] duration metric: took 4.039383275s to provisionDockerMachine
	I1129 09:20:19.036705  222878 client.go:176] duration metric: took 5.829342348s to LocalClient.Create
	I1129 09:20:19.036723  222878 start.go:167] duration metric: took 5.829433418s to libmachine.API.Create "no-preload-230403"
	I1129 09:20:19.036733  222878 start.go:293] postStartSetup for "no-preload-230403" (driver="docker")
	I1129 09:20:19.036744  222878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:20:19.036810  222878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:20:19.036863  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:19.054558  222878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa Username:docker}
	I1129 09:20:19.161154  222878 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:20:19.165056  222878 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:20:19.165086  222878 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:20:19.165116  222878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/addons for local assets ...
	I1129 09:20:19.165196  222878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/files for local assets ...
	I1129 09:20:19.165294  222878 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem -> 41372.pem in /etc/ssl/certs
	I1129 09:20:19.165459  222878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:20:19.175008  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:20:19.202166  222878 start.go:296] duration metric: took 165.419871ms for postStartSetup
	I1129 09:20:19.202535  222878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-230403
	I1129 09:20:19.222107  222878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/config.json ...
	I1129 09:20:19.222396  222878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:20:19.222436  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:19.240201  222878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa Username:docker}
	I1129 09:20:19.346358  222878 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:20:19.351907  222878 start.go:128] duration metric: took 6.150146246s to createHost
	I1129 09:20:19.351975  222878 start.go:83] releasing machines lock for "no-preload-230403", held for 6.150337057s
	I1129 09:20:19.352082  222878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-230403
	I1129 09:20:19.369647  222878 ssh_runner.go:195] Run: cat /version.json
	I1129 09:20:19.369701  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:19.369794  222878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:20:19.369854  222878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-230403
	I1129 09:20:19.412764  222878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa Username:docker}
	I1129 09:20:19.422423  222878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/no-preload-230403/id_rsa Username:docker}
	I1129 09:20:19.618519  222878 ssh_runner.go:195] Run: systemctl --version
	I1129 09:20:19.626187  222878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:20:19.630590  222878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:20:19.630681  222878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:20:19.659536  222878 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1129 09:20:19.659559  222878 start.go:496] detecting cgroup driver to use...
	I1129 09:20:19.659594  222878 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 09:20:19.659644  222878 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:20:19.675641  222878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:20:19.690722  222878 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:20:19.690795  222878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:20:19.710602  222878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:20:19.735104  222878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:20:19.862098  222878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:20:20.020548  222878 docker.go:234] disabling docker service ...
	I1129 09:20:20.020764  222878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:20:20.049579  222878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:20:20.066560  222878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:20:20.195869  222878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:20:20.317681  222878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:20:20.332092  222878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:20:20.348128  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:20:20.359261  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:20:20.369657  222878 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1129 09:20:20.369726  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1129 09:20:20.379235  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:20:20.388089  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:20:20.397442  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:20:20.406391  222878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:20:20.414674  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:20:20.423896  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:20:20.432684  222878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:20:20.441584  222878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:20:20.449626  222878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:20:20.458580  222878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:20:20.578649  222878 ssh_runner.go:195] Run: sudo systemctl restart containerd
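Note: the sed batch above tailors the node's /etc/containerd/config.toml before the restart: cgroupfs instead of systemd cgroups, pause:3.10.1 as the sandbox image, the runc.v2 runtime, /etc/cni/net.d as the CNI conf dir, and unprivileged ports enabled. Condensed to the edits that matter most for this run (commands copied from the log; the rest of the batch is analogous):

	# point crictl at containerd's socket
	printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
	# match the host's cgroupfs driver and the expected pause image, then restart containerd
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo systemctl restart containerd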
	I1129 09:20:20.669910  222878 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:20:20.670001  222878 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:20:20.674049  222878 start.go:564] Will wait 60s for crictl version
	I1129 09:20:20.674121  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:20.677882  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:20:20.711552  222878 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:20:20.711620  222878 ssh_runner.go:195] Run: containerd --version
	I1129 09:20:20.734338  222878 ssh_runner.go:195] Run: containerd --version
	I1129 09:20:20.760452  222878 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:20:20.763394  222878 cli_runner.go:164] Run: docker network inspect no-preload-230403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:20:20.779886  222878 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 09:20:20.783617  222878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
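Note: the one-liner above rewrites the node's /etc/hosts so host.minikube.internal points at the network gateway 192.168.85.1. Verifying it from the host is a single exec into the node container (a sketch; the container name matches the profile):

	# should print "192.168.85.1	host.minikube.internal" after the rewrite
	docker exec no-preload-230403 grep host.minikube.internal /etc/hosts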
	I1129 09:20:20.793588  222878 kubeadm.go:884] updating cluster {Name:no-preload-230403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-230403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:20:20.793740  222878 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:20:20.793820  222878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:20:20.818996  222878 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 09:20:20.819021  222878 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1129 09:20:20.819075  222878 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:20.819290  222878 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:20.819377  222878 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:20.819472  222878 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:20.819580  222878 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:20.819670  222878 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1129 09:20:20.819757  222878 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:20.819836  222878 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:20.820993  222878 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:20.821570  222878 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:20.821829  222878 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:20.821983  222878 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:20.822235  222878 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:20.822385  222878 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:20.822667  222878 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1129 09:20:20.823079  222878 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.122603  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1129 09:20:21.122681  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1129 09:20:21.142272  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a"
	I1129 09:20:21.142372  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:21.156765  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0"
	I1129 09:20:21.156842  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:21.158253  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196"
	I1129 09:20:21.158318  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:21.159304  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc"
	I1129 09:20:21.159366  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:21.163083  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e"
	I1129 09:20:21.163151  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:21.163275  222878 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9"
	I1129 09:20:21.163342  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.165618  222878 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1129 09:20:21.165704  222878 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1129 09:20:21.165791  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.179345  222878 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1129 09:20:21.179432  222878 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:21.179520  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.225665  222878 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1129 09:20:21.225755  222878 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:21.225854  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.225939  222878 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1129 09:20:21.225991  222878 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:21.226032  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.226126  222878 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1129 09:20:21.226162  222878 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:21.226209  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.237496  222878 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1129 09:20:21.237581  222878 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:21.237665  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.239070  222878 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1129 09:20:21.239288  222878 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.239346  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:21.239286  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:21.239244  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:20:21.240343  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:21.240430  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:21.240578  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:21.248302  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:21.337972  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.338141  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:20:21.338156  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:21.350334  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:21.350500  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:21.350586  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:21.350679  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:20:21.436779  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:20:21.436931  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.437008  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:20:21.482969  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:20:21.483085  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:20:21.483137  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:20:21.491181  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
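The containerd.go:267 checks above shell out to ctr with a name filter and compare what is found against the digest minikube expects for this Kubernetes version; since this is a no-preload run nothing matches, so each stale tag is removed with crictl rmi before the cached tarballs are loaded. A by-hand equivalent of one such check (image name taken from the log above; this is an illustration, not minikube's code):

  IMG=registry.k8s.io/pause:3.10.1
  if sudo ctr -n=k8s.io images ls "name==$IMG" | grep -q "$IMG"; then
    echo "$IMG already present in the k8s.io namespace"
  else
    sudo /usr/local/bin/crictl rmi "$IMG" 2>/dev/null || true   # drop any stale tag, as the runs above do
  fi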
	I1129 09:20:21.551573  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1129 09:20:21.551783  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1129 09:20:21.551782  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:20:21.551677  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1129 09:20:21.551991  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:20:21.589991  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1129 09:20:21.590095  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:20:21.590176  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1129 09:20:21.590233  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:20:21.590311  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1129 09:20:21.590381  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:20:21.599084  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1129 09:20:21.599203  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:20:21.606906  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1129 09:20:21.607120  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1129 09:20:21.607120  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1129 09:20:21.607245  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1129 09:20:21.607065  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1129 09:20:21.607080  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1129 09:20:21.607377  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1129 09:20:21.607089  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1129 09:20:21.607470  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1129 09:20:21.607010  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1129 09:20:21.607558  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1129 09:20:21.607693  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:20:21.611409  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1129 09:20:21.611475  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1129 09:20:21.621627  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1129 09:20:21.621809  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1129 09:20:21.715246  222878 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1129 09:20:21.715371  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1129 09:20:22.049743  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
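Each image follows the same three-step cycle just completed for pause_3.10.1: stat the target under /var/lib/minikube/images on the node, copy the cached tarball over SSH if it is missing, then import it into containerd's k8s.io namespace. Roughly, under the assumption of a reachable node address in $NODE (the paths are the ones logged; the test drives this through ssh_runner rather than a script):

  TAR=pause_3.10.1
  ssh root@"$NODE" stat "/var/lib/minikube/images/$TAR" >/dev/null 2>&1 || \
    scp "$HOME/.minikube/cache/images/arm64/registry.k8s.io/$TAR" root@"$NODE":/var/lib/minikube/images/"$TAR"
  ssh root@"$NODE" sudo ctr -n=k8s.io images import "/var/lib/minikube/images/$TAR"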
	I1129 09:20:22.146786  222878 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:20:22.146909  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1129 09:20:22.239238  222878 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1129 09:20:22.239372  222878 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1129 09:20:22.239461  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	W1129 09:20:21.201342  219229 node_ready.go:57] node "old-k8s-version-071895" has "Ready":"False" status (will retry)
	W1129 09:20:23.202246  219229 node_ready.go:57] node "old-k8s-version-071895" has "Ready":"False" status (will retry)
	I1129 09:20:23.813839  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.666881209s)
	I1129 09:20:23.813866  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1129 09:20:23.813884  222878 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:20:23.813934  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:20:23.813990  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.574510089s)
	I1129 09:20:23.814059  222878 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1129 09:20:23.814109  222878 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:23.814162  222878 ssh_runner.go:195] Run: which crictl
	I1129 09:20:25.262220  222878 ssh_runner.go:235] Completed: which crictl: (1.448029919s)
	I1129 09:20:25.262315  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:25.262227  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.44826357s)
	I1129 09:20:25.262380  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1129 09:20:25.262400  222878 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:20:25.262443  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:20:26.253409  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1129 09:20:26.253448  222878 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:20:26.253502  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:20:26.253588  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:27.306910  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.053379529s)
	I1129 09:20:27.306932  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1129 09:20:27.306934  222878 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.053324259s)
	I1129 09:20:27.306948  222878 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:20:27.306998  222878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:20:27.306998  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:20:27.339643  222878 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1129 09:20:27.339756  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1129 09:20:25.701399  219229 node_ready.go:57] node "old-k8s-version-071895" has "Ready":"False" status (will retry)
	W1129 09:20:28.200255  219229 node_ready.go:57] node "old-k8s-version-071895" has "Ready":"False" status (will retry)
	I1129 09:20:29.701513  219229 node_ready.go:49] node "old-k8s-version-071895" is "Ready"
	I1129 09:20:29.701545  219229 node_ready.go:38] duration metric: took 12.504000526s for node "old-k8s-version-071895" to be "Ready" ...
	I1129 09:20:29.701560  219229 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:20:29.701622  219229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:20:29.719485  219229 api_server.go:72] duration metric: took 14.188022937s to wait for apiserver process to appear ...
	I1129 09:20:29.719511  219229 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:20:29.719530  219229 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:20:29.736520  219229 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:20:29.740376  219229 api_server.go:141] control plane version: v1.28.0
	I1129 09:20:29.740411  219229 api_server.go:131] duration metric: took 20.892436ms to wait for apiserver health ...
	I1129 09:20:29.740421  219229 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:20:29.748136  219229 system_pods.go:59] 8 kube-system pods found
	I1129 09:20:29.748178  219229 system_pods.go:61] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:20:29.748186  219229 system_pods.go:61] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:29.748192  219229 system_pods.go:61] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:29.748201  219229 system_pods.go:61] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:29.748206  219229 system_pods.go:61] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:29.748209  219229 system_pods.go:61] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:29.748213  219229 system_pods.go:61] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:29.748219  219229 system_pods.go:61] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:20:29.748231  219229 system_pods.go:74] duration metric: took 7.804151ms to wait for pod list to return data ...
	I1129 09:20:29.748241  219229 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:20:29.751107  219229 default_sa.go:45] found service account: "default"
	I1129 09:20:29.751135  219229 default_sa.go:55] duration metric: took 2.887312ms for default service account to be created ...
	I1129 09:20:29.751147  219229 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:20:29.757754  219229 system_pods.go:86] 8 kube-system pods found
	I1129 09:20:29.757797  219229 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:20:29.757804  219229 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:29.757810  219229 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:29.757815  219229 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:29.757819  219229 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:29.757823  219229 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:29.757827  219229 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:29.757833  219229 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:20:29.757863  219229 retry.go:31] will retry after 212.604223ms: missing components: kube-dns
	I1129 09:20:29.976302  219229 system_pods.go:86] 8 kube-system pods found
	I1129 09:20:29.976339  219229 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:20:29.976347  219229 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:29.976353  219229 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:29.976359  219229 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:29.976364  219229 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:29.976368  219229 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:29.976373  219229 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:29.976379  219229 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:20:29.976398  219229 retry.go:31] will retry after 279.278138ms: missing components: kube-dns
	I1129 09:20:30.268579  219229 system_pods.go:86] 8 kube-system pods found
	I1129 09:20:30.268774  219229 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:20:30.268790  219229 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:30.268797  219229 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:30.268802  219229 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:30.268807  219229 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:30.268811  219229 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:30.268816  219229 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:30.268826  219229 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:20:30.268843  219229 retry.go:31] will retry after 368.451427ms: missing components: kube-dns
	I1129 09:20:30.642681  219229 system_pods.go:86] 8 kube-system pods found
	I1129 09:20:30.642718  219229 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:20:30.642726  219229 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:30.642733  219229 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:30.642738  219229 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:30.642743  219229 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:30.642747  219229 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:30.642752  219229 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:30.642761  219229 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:20:30.642776  219229 retry.go:31] will retry after 521.296683ms: missing components: kube-dns
	I1129 09:20:31.171413  219229 system_pods.go:86] 8 kube-system pods found
	I1129 09:20:31.171442  219229 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Running
	I1129 09:20:31.171449  219229 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running
	I1129 09:20:31.171454  219229 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:20:31.171472  219229 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running
	I1129 09:20:31.171482  219229 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running
	I1129 09:20:31.171487  219229 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:20:31.171502  219229 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running
	I1129 09:20:31.171506  219229 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Running
	I1129 09:20:31.171514  219229 system_pods.go:126] duration metric: took 1.420361927s to wait for k8s-apps to be running ...
	I1129 09:20:31.171522  219229 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:20:31.171578  219229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:20:31.191104  219229 system_svc.go:56] duration metric: took 19.570105ms WaitForService to wait for kubelet
	I1129 09:20:31.191198  219229 kubeadm.go:587] duration metric: took 15.659726511s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:20:31.191233  219229 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:20:31.194404  219229 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:20:31.194485  219229 node_conditions.go:123] node cpu capacity is 2
	I1129 09:20:31.194514  219229 node_conditions.go:105] duration metric: took 3.245952ms to run NodePressure ...
	I1129 09:20:31.194558  219229 start.go:242] waiting for startup goroutines ...
	I1129 09:20:31.194583  219229 start.go:247] waiting for cluster config update ...
	I1129 09:20:31.194611  219229 start.go:256] writing updated cluster config ...
	I1129 09:20:31.195146  219229 ssh_runner.go:195] Run: rm -f paused
	I1129 09:20:31.201208  219229 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:20:31.206616  219229 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-htmzr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.217168  219229 pod_ready.go:94] pod "coredns-5dd5756b68-htmzr" is "Ready"
	I1129 09:20:31.217243  219229 pod_ready.go:86] duration metric: took 10.548708ms for pod "coredns-5dd5756b68-htmzr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.223645  219229 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.234784  219229 pod_ready.go:94] pod "etcd-old-k8s-version-071895" is "Ready"
	I1129 09:20:31.234859  219229 pod_ready.go:86] duration metric: took 11.131317ms for pod "etcd-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.248582  219229 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.259407  219229 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-071895" is "Ready"
	I1129 09:20:31.259482  219229 pod_ready.go:86] duration metric: took 10.819537ms for pod "kube-apiserver-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.263998  219229 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.606531  219229 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-071895" is "Ready"
	I1129 09:20:31.606610  219229 pod_ready.go:86] duration metric: took 342.539937ms for pod "kube-controller-manager-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:31.808005  219229 pod_ready.go:83] waiting for pod "kube-proxy-4jxrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:32.206161  219229 pod_ready.go:94] pod "kube-proxy-4jxrn" is "Ready"
	I1129 09:20:32.206190  219229 pod_ready.go:86] duration metric: took 398.137324ms for pod "kube-proxy-4jxrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:32.422404  219229 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:32.806577  219229 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-071895" is "Ready"
	I1129 09:20:32.806676  219229 pod_ready.go:86] duration metric: took 384.18875ms for pod "kube-scheduler-old-k8s-version-071895" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:20:32.806706  219229 pod_ready.go:40] duration metric: took 1.605412666s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:20:32.883122  219229 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1129 09:20:32.886925  219229 out.go:203] 
	W1129 09:20:32.889873  219229 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1129 09:20:32.892945  219229 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1129 09:20:32.896883  219229 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-071895" cluster and "default" namespace by default
	I1129 09:20:28.381724  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.074642707s)
	I1129 09:20:28.381753  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1129 09:20:28.381780  222878 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:20:28.381828  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:20:28.381907  222878 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.042136021s)
	I1129 09:20:28.381924  222878 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1129 09:20:28.381944  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1129 09:20:31.974151  222878 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.592291332s)
	I1129 09:20:31.974192  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1129 09:20:31.974218  222878 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:20:31.974299  222878 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:20:32.697903  222878 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-2317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1129 09:20:32.697943  222878 cache_images.go:125] Successfully loaded all cached images
	I1129 09:20:32.697949  222878 cache_images.go:94] duration metric: took 11.878914483s to LoadCachedImages
	I1129 09:20:32.697961  222878 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1129 09:20:32.698052  222878 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-230403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-230403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
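The ExecStart override above is delivered as a systemd drop-in (the 321-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf scp'd a few lines further down). To confirm what the node actually runs, something like the following works against the live profile (illustrative; not part of the test):

  minikube -p no-preload-230403 ssh -- systemctl cat kubelet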
	I1129 09:20:32.698117  222878 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:20:32.724003  222878 cni.go:84] Creating CNI manager for ""
	I1129 09:20:32.724023  222878 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:20:32.724042  222878 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:20:32.724064  222878 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-230403 NodeName:no-preload-230403 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:20:32.724177  222878 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-230403"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
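The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what get written to /var/tmp/minikube/kubeadm.yaml.new below (2230 bytes). Recent kubeadm releases ship a validator, so a file like this can be sanity-checked by hand with the binary staged in the next step; the test itself does not run this:

  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new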
	I1129 09:20:32.724247  222878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:20:32.734586  222878 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1129 09:20:32.734661  222878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1129 09:20:32.744055  222878 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1129 09:20:32.744148  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1129 09:20:32.744244  222878 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
	I1129 09:20:32.744287  222878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:20:32.744372  222878 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256
	I1129 09:20:32.744422  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1129 09:20:32.765160  222878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1129 09:20:32.765194  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1129 09:20:32.765213  222878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1129 09:20:32.765239  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1129 09:20:32.765317  222878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1129 09:20:32.779265  222878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1129 09:20:32.779306  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
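The binary.go:80 lines record where each binary would come from if the local cache were cold: dl.k8s.io plus a detached .sha256 file containing only the digest. A stand-alone fetch with the same verification looks like this (the test only scp's the already-cached files):

  V=v1.34.1; ARCH=arm64; BIN=kubelet
  curl -fsSLO "https://dl.k8s.io/release/$V/bin/linux/$ARCH/$BIN"
  echo "$(curl -fsSL https://dl.k8s.io/release/$V/bin/linux/$ARCH/$BIN.sha256)  $BIN" | sha256sum -c -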
	I1129 09:20:33.994121  222878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:20:34.006964  222878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1129 09:20:34.022992  222878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:20:34.039936  222878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1129 09:20:34.054478  222878 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:20:34.059158  222878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:20:34.071443  222878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:20:34.198077  222878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:20:34.225128  222878 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403 for IP: 192.168.85.2
	I1129 09:20:34.225153  222878 certs.go:195] generating shared ca certs ...
	I1129 09:20:34.225176  222878 certs.go:227] acquiring lock for ca certs: {Name:mke655c14945a8520f2f9de36531df923afb2bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:34.225330  222878 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key
	I1129 09:20:34.225385  222878 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key
	I1129 09:20:34.225397  222878 certs.go:257] generating profile certs ...
	I1129 09:20:34.225460  222878 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.key
	I1129 09:20:34.225477  222878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt with IP's: []
	I1129 09:20:34.561780  222878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt ...
	I1129 09:20:34.561812  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: {Name:mk0506510be8624c61cf78aca5533a42dbe12049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:34.562018  222878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.key ...
	I1129 09:20:34.562032  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.key: {Name:mk7728838f62624078d9f102edcc2e7e92fca24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:34.562134  222878 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key.9c37d96b
	I1129 09:20:34.562155  222878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt.9c37d96b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1129 09:20:35.279064  222878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt.9c37d96b ...
	I1129 09:20:35.279097  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt.9c37d96b: {Name:mkb8ab5f6d41eda35913c9ea362db6a34366a395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:35.279295  222878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key.9c37d96b ...
	I1129 09:20:35.279312  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key.9c37d96b: {Name:mk21caee54335560e86fdf60eec601c387bdb604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:35.279403  222878 certs.go:382] copying /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt.9c37d96b -> /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt
	I1129 09:20:35.279483  222878 certs.go:386] copying /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key.9c37d96b -> /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key
	I1129 09:20:35.279555  222878 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.key
	I1129 09:20:35.279573  222878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.crt with IP's: []
	I1129 09:20:35.662938  222878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.crt ...
	I1129 09:20:35.662968  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.crt: {Name:mk84c114a546c4abdb7a044023d46a90cfce8d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:35.663145  222878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.key ...
	I1129 09:20:35.663161  222878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.key: {Name:mk0fc11a967c87ab7d123db8f16798c3182082c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:20:35.663352  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem (1338 bytes)
	W1129 09:20:35.663398  222878 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137_empty.pem, impossibly tiny 0 bytes
	I1129 09:20:35.663418  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:20:35.663446  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:20:35.663474  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:20:35.663499  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem (1679 bytes)
	I1129 09:20:35.663547  222878 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:20:35.664157  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:20:35.691460  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1129 09:20:35.717525  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:20:35.745851  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:20:35.769815  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:20:35.790501  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:20:35.812066  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:20:35.830915  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:20:35.849395  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem --> /usr/share/ca-certificates/4137.pem (1338 bytes)
	I1129 09:20:35.872584  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /usr/share/ca-certificates/41372.pem (1708 bytes)
	I1129 09:20:35.893049  222878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:20:35.918494  222878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:20:35.936255  222878 ssh_runner.go:195] Run: openssl version
	I1129 09:20:35.943518  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41372.pem && ln -fs /usr/share/ca-certificates/41372.pem /etc/ssl/certs/41372.pem"
	I1129 09:20:35.954406  222878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41372.pem
	I1129 09:20:35.959997  222878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/41372.pem
	I1129 09:20:35.960085  222878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41372.pem
	I1129 09:20:36.006091  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41372.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:20:36.017475  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:20:36.027314  222878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:20:36.031927  222878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:20:36.031999  222878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:20:36.075486  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:20:36.084604  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4137.pem && ln -fs /usr/share/ca-certificates/4137.pem /etc/ssl/certs/4137.pem"
	I1129 09:20:36.094214  222878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4137.pem
	I1129 09:20:36.098768  222878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/4137.pem
	I1129 09:20:36.098840  222878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4137.pem
	I1129 09:20:36.143207  222878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4137.pem /etc/ssl/certs/51391683.0"
	I1129 09:20:36.152425  222878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:20:36.156708  222878 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:20:36.156761  222878 kubeadm.go:401] StartCluster: {Name:no-preload-230403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-230403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:20:36.156839  222878 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:20:36.156905  222878 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:20:36.184470  222878 cri.go:89] found id: ""
	I1129 09:20:36.184537  222878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:20:36.193057  222878 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:20:36.201441  222878 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:20:36.201527  222878 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:20:36.210060  222878 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:20:36.210079  222878 kubeadm.go:158] found existing configuration files:
	
	I1129 09:20:36.210164  222878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:20:36.218503  222878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:20:36.218590  222878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:20:36.226704  222878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:20:36.235392  222878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:20:36.235519  222878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:20:36.243976  222878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:20:36.252727  222878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:20:36.252802  222878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:20:36.261462  222878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:20:36.270714  222878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:20:36.270782  222878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:20:36.278924  222878 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:20:36.329064  222878 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:20:36.329252  222878 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:20:36.365187  222878 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:20:36.365275  222878 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1129 09:20:36.365324  222878 kubeadm.go:319] OS: Linux
	I1129 09:20:36.365388  222878 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:20:36.365445  222878 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1129 09:20:36.365513  222878 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:20:36.365576  222878 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:20:36.365638  222878 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:20:36.365702  222878 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:20:36.365769  222878 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:20:36.365832  222878 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:20:36.365884  222878 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1129 09:20:36.435193  222878 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:20:36.435380  222878 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:20:36.435539  222878 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:20:36.441349  222878 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:20:36.446636  222878 out.go:252]   - Generating certificates and keys ...
	I1129 09:20:36.446799  222878 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:20:36.446906  222878 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:20:37.362846  222878 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:20:37.721165  222878 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:20:37.949639  222878 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:20:38.413017  222878 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:20:38.775660  222878 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:20:38.776186  222878 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-230403] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:20:39.104705  222878 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:20:39.105064  222878 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-230403] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:20:39.359331  222878 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:20:39.818423  222878 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:20:39.880381  222878 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:20:39.880638  222878 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:20:41.216161  222878 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:20:42.199207  222878 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:20:42.918813  222878 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:20:43.410581  222878 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:20:43.826978  222878 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:20:43.827675  222878 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:20:43.830453  222878 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:20:43.834084  222878 out.go:252]   - Booting up control plane ...
	I1129 09:20:43.834197  222878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:20:43.834283  222878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:20:43.834359  222878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:20:43.851485  222878 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:20:43.851654  222878 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:20:43.861644  222878 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:20:43.863805  222878 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:20:43.864136  222878 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:20:44.015245  222878 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:20:44.015367  222878 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:20:45.517833  222878 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502807558s
	I1129 09:20:45.522544  222878 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:20:45.522646  222878 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1129 09:20:45.522745  222878 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:20:45.522825  222878 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b9e829b9abde5       1611cd07b61d5       12 seconds ago      Running             busybox                   0                   ddd79130415cc       busybox                                          default
	f8f1e6dc2605a       97e04611ad434       19 seconds ago      Running             coredns                   0                   0f3ce8e6c4105       coredns-5dd5756b68-htmzr                         kube-system
	359d9432ef497       ba04bb24b9575       20 seconds ago      Running             storage-provisioner       0                   66250dabca2c6       storage-provisioner                              kube-system
	db1d77c6c85ea       b1a8c6f707935       31 seconds ago      Running             kindnet-cni               0                   78bf9329ff249       kindnet-58g5f                                    kube-system
	000a8de26034d       940f54a5bcae9       33 seconds ago      Running             kube-proxy                0                   ec6c1087a251f       kube-proxy-4jxrn                                 kube-system
	c6e9c9ab04ae1       46cc66ccc7c19       55 seconds ago      Running             kube-controller-manager   0                   16b3e81e696c9       kube-controller-manager-old-k8s-version-071895   kube-system
	41dff26eb8e67       9cdd6470f48c8       55 seconds ago      Running             etcd                      0                   468f2a4d8c24a       etcd-old-k8s-version-071895                      kube-system
	d34a4ced6121d       00543d2fe5d71       55 seconds ago      Running             kube-apiserver            0                   9630ead47757e       kube-apiserver-old-k8s-version-071895            kube-system
	7c5e9c05d20b8       762dce4090c5f       55 seconds ago      Running             kube-scheduler            0                   676bacb96168a       kube-scheduler-old-k8s-version-071895            kube-system
	
	
	==> containerd <==
	Nov 29 09:20:29 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:29.985394384Z" level=info msg="connecting to shim 359d9432ef4979d387512d5a2a5a3cd9fb7a0987f4a3540a23407b70f7faf163" address="unix:///run/containerd/s/34373f541c51fce0619cd6b7f9bbe560b47e8c8788713a29595219a5d22d901b" protocol=ttrpc version=3
	Nov 29 09:20:29 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:29.992937298Z" level=info msg="CreateContainer within sandbox \"0f3ce8e6c41050910070bab1a2edce113b2eb3bd98f3bca1d8006c18bcd1714f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.071685071Z" level=info msg="Container f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.095345483Z" level=info msg="CreateContainer within sandbox \"0f3ce8e6c41050910070bab1a2edce113b2eb3bd98f3bca1d8006c18bcd1714f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d\""
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.097739089Z" level=info msg="StartContainer for \"f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d\""
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.099238569Z" level=info msg="connecting to shim f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d" address="unix:///run/containerd/s/3150843ad07ed5a21377bb0ba6fe93d3c73033d9ccfa3b4a9e0ed16a5e8438c5" protocol=ttrpc version=3
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.189374834Z" level=info msg="StartContainer for \"359d9432ef4979d387512d5a2a5a3cd9fb7a0987f4a3540a23407b70f7faf163\" returns successfully"
	Nov 29 09:20:30 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:30.269744369Z" level=info msg="StartContainer for \"f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d\" returns successfully"
	Nov 29 09:20:35 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:35.534277133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3abcbd08-d7c4-4a13-b94c-6f6424975411,Namespace:default,Attempt:0,}"
	Nov 29 09:20:35 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:35.597130515Z" level=info msg="connecting to shim ddd79130415cc8649c69caccfc081affa5f1da8a2517127cdbcf8d824a791490" address="unix:///run/containerd/s/6d78da511a42142891dae64b3eb6a171701a2aacf243055415398ac4ec21cd7a" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:20:35 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:35.703469012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3abcbd08-d7c4-4a13-b94c-6f6424975411,Namespace:default,Attempt:0,} returns sandbox id \"ddd79130415cc8649c69caccfc081affa5f1da8a2517127cdbcf8d824a791490\""
	Nov 29 09:20:35 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:35.712136437Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.805646978Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.808907002Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.812726259Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.815034818Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.816002472Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.103636897s"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.816153291Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.820479635Z" level=info msg="CreateContainer within sandbox \"ddd79130415cc8649c69caccfc081affa5f1da8a2517127cdbcf8d824a791490\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.837353396Z" level=info msg="Container b9e829b9abde5402e2cbe089579fccb3fcaa2d4225461d6d9fe9bceddbff0c20: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.849004626Z" level=info msg="CreateContainer within sandbox \"ddd79130415cc8649c69caccfc081affa5f1da8a2517127cdbcf8d824a791490\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"b9e829b9abde5402e2cbe089579fccb3fcaa2d4225461d6d9fe9bceddbff0c20\""
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.852339424Z" level=info msg="StartContainer for \"b9e829b9abde5402e2cbe089579fccb3fcaa2d4225461d6d9fe9bceddbff0c20\""
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.855127486Z" level=info msg="connecting to shim b9e829b9abde5402e2cbe089579fccb3fcaa2d4225461d6d9fe9bceddbff0c20" address="unix:///run/containerd/s/6d78da511a42142891dae64b3eb6a171701a2aacf243055415398ac4ec21cd7a" protocol=ttrpc version=3
	Nov 29 09:20:37 old-k8s-version-071895 containerd[758]: time="2025-11-29T09:20:37.962061310Z" level=info msg="StartContainer for \"b9e829b9abde5402e2cbe089579fccb3fcaa2d4225461d6d9fe9bceddbff0c20\" returns successfully"
	Nov 29 09:20:44 old-k8s-version-071895 containerd[758]: E1129 09:20:44.932672     758 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51515 - 3634 "HINFO IN 3397046818821823914.8081764445601178770. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005882235s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-071895
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-071895
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=old-k8s-version-071895
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_20_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:19:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-071895
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:20:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:20:33 +0000   Sat, 29 Nov 2025 09:19:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:20:33 +0000   Sat, 29 Nov 2025 09:19:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:20:33 +0000   Sat, 29 Nov 2025 09:19:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:20:33 +0000   Sat, 29 Nov 2025 09:20:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-071895
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                453a3f46-be9b-4440-b54b-7bd5b2275c63
	  Boot ID:                    6647f078-4edd-40c5-9d0e-49eb5ed00bd7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 coredns-5dd5756b68-htmzr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     35s
	  kube-system                 etcd-old-k8s-version-071895                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         47s
	  kube-system                 kindnet-58g5f                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      35s
	  kube-system                 kube-apiserver-old-k8s-version-071895             250m (12%)    0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-controller-manager-old-k8s-version-071895    200m (10%)    0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-proxy-4jxrn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-scheduler-old-k8s-version-071895             100m (5%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 33s   kube-proxy       
	  Normal  Starting                 48s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s   kubelet          Node old-k8s-version-071895 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s   kubelet          Node old-k8s-version-071895 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s   kubelet          Node old-k8s-version-071895 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  47s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           36s   node-controller  Node old-k8s-version-071895 event: Registered Node old-k8s-version-071895 in Controller
	  Normal  NodeReady                21s   kubelet          Node old-k8s-version-071895 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014634] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.570975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032231] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767655] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.282538] kauditd_printk_skb: 36 callbacks suppressed
	[Nov29 08:39] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001077] FS-Cache: O-cookie d=00000000b08097f7{9P.session} n=00000000a17ba85f
	[  +0.001074] FS-Cache: O-key=[10] '34323935323231393134'
	[  +0.000776] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=00000000b08097f7{9P.session} n=00000000534469ad
	[  +0.001092] FS-Cache: N-key=[10] '34323935323231393134'
	[Nov29 09:19] hrtimer: interrupt took 12545193 ns
	
	
	==> etcd [41dff26eb8e679cc29a87f83f59d117073bdaeb9ac41cb8ac8ee1cb32c92511a] <==
	{"level":"info","ts":"2025-11-29T09:19:54.897611Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:19:54.901566Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-29T09:19:54.901625Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-29T09:19:55.060661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-29T09:19:55.060785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-29T09:19:55.060882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-29T09:19:55.060949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-29T09:19:55.060985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:19:55.061056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-29T09:19:55.06113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:19:55.062447Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-071895 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-29T09:19:55.062536Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:19:55.063797Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-29T09:19:55.063991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:19:55.065852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:19:55.066951Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-29T09:19:55.067534Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:19:55.067717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:19:55.071793Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:19:55.071959Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-29T09:19:55.072006Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-29T09:20:15.052803Z","caller":"traceutil/trace.go:171","msg":"trace[25407896] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"106.818617ms","start":"2025-11-29T09:20:14.945956Z","end":"2025-11-29T09:20:15.052774Z","steps":["trace[25407896] 'process raft request'  (duration: 106.616925ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:20:15.064957Z","caller":"traceutil/trace.go:171","msg":"trace[1542162002] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"106.599802ms","start":"2025-11-29T09:20:14.95834Z","end":"2025-11-29T09:20:15.064939Z","steps":["trace[1542162002] 'process raft request'  (duration: 106.563165ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:20:15.065342Z","caller":"traceutil/trace.go:171","msg":"trace[758518492] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"119.137568ms","start":"2025-11-29T09:20:14.946194Z","end":"2025-11-29T09:20:15.065332Z","steps":["trace[758518492] 'process raft request'  (duration: 118.584375ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:20:15.065438Z","caller":"traceutil/trace.go:171","msg":"trace[2009828336] transaction","detail":"{read_only:false; response_revision:299; number_of_response:1; }","duration":"112.325548ms","start":"2025-11-29T09:20:14.953105Z","end":"2025-11-29T09:20:15.065431Z","steps":["trace[2009828336] 'process raft request'  (duration: 111.76593ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:20:50 up  1:03,  0 user,  load average: 3.24, 2.71, 2.63
	Linux old-k8s-version-071895 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [db1d77c6c85eaf5ebd7dc839fb54d40271ee80c34795b249a47534f35c064f1c] <==
	I1129 09:20:19.083145       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:20:19.083520       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:20:19.083647       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:20:19.083659       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:20:19.083671       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:20:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:20:19.286160       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:20:19.286239       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:20:19.286373       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:20:19.287882       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:20:19.580767       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:20:19.580802       1 metrics.go:72] Registering metrics
	I1129 09:20:19.580865       1 controller.go:711] "Syncing nftables rules"
	I1129 09:20:29.287220       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:20:29.287264       1 main.go:301] handling current node
	I1129 09:20:39.286004       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:20:39.286281       1 main.go:301] handling current node
	I1129 09:20:49.294522       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:20:49.294558       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d34a4ced6121deea5f0e58655a9a45e86fccdde412c9acf3d1e35ab330cd1b4b] <==
	I1129 09:19:58.687723       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1129 09:19:58.689876       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1129 09:19:58.689902       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1129 09:19:58.690079       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1129 09:19:58.691126       1 aggregator.go:166] initial CRD sync complete...
	I1129 09:19:58.691143       1 autoregister_controller.go:141] Starting autoregister controller
	I1129 09:19:58.691150       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 09:19:58.691158       1 cache.go:39] Caches are synced for autoregister controller
	E1129 09:19:58.752402       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I1129 09:19:58.885509       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:19:59.184340       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:19:59.194717       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:19:59.195065       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:20:00.545658       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:20:00.693098       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:20:00.863619       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:20:00.877801       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 09:20:00.879300       1 controller.go:624] quota admission added evaluator for: endpoints
	I1129 09:20:00.885677       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:20:01.758115       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1129 09:20:02.382429       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1129 09:20:02.396930       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:20:02.411199       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1129 09:20:15.297358       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1129 09:20:15.463834       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [c6e9c9ab04ae16e634fbb9b4e1d16587356b43ecc4799412da2e56e79409870b] <==
	I1129 09:20:15.111764       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-071895" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1129 09:20:15.174792       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-071895" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1129 09:20:15.320980       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1129 09:20:15.351255       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:20:15.351286       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1129 09:20:15.384164       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:20:15.486462       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4jxrn"
	I1129 09:20:15.486489       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-58g5f"
	I1129 09:20:15.643761       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rk2xx"
	I1129 09:20:15.661237       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-htmzr"
	I1129 09:20:15.701868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="389.914526ms"
	I1129 09:20:15.744722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.452889ms"
	I1129 09:20:15.746651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.963µs"
	I1129 09:20:17.246540       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1129 09:20:17.300225       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-rk2xx"
	I1129 09:20:17.312673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.947307ms"
	I1129 09:20:17.322261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.52965ms"
	I1129 09:20:17.323333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="806.512µs"
	I1129 09:20:29.431259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.143µs"
	I1129 09:20:29.490111       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.681µs"
	I1129 09:20:30.130582       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1129 09:20:30.130619       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-htmzr" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-htmzr"
	I1129 09:20:30.131138       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1129 09:20:31.018335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.226889ms"
	I1129 09:20:31.018459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.794µs"
	
	
	==> kube-proxy [000a8de26034dcdc6da38237d77f79fa914b3088e593f0bbd13e14b39b42bf00] <==
	I1129 09:20:16.555876       1 server_others.go:69] "Using iptables proxy"
	I1129 09:20:16.579548       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1129 09:20:16.643168       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:20:16.645058       1 server_others.go:152] "Using iptables Proxier"
	I1129 09:20:16.645109       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1129 09:20:16.645128       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1129 09:20:16.645164       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1129 09:20:16.645384       1 server.go:846] "Version info" version="v1.28.0"
	I1129 09:20:16.645401       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:20:16.657042       1 config.go:188] "Starting service config controller"
	I1129 09:20:16.657067       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1129 09:20:16.657128       1 config.go:97] "Starting endpoint slice config controller"
	I1129 09:20:16.657132       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1129 09:20:16.657163       1 config.go:315] "Starting node config controller"
	I1129 09:20:16.657166       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1129 09:20:16.757328       1 shared_informer.go:318] Caches are synced for node config
	I1129 09:20:16.757472       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1129 09:20:16.757514       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [7c5e9c05d20b870a1e96cdb0bbf1479f013609a2bbcde73ff5f9b106d4a35049] <==
	I1129 09:19:58.666858       1 serving.go:348] Generated self-signed cert in-memory
	W1129 09:20:00.321955       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 09:20:00.322235       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 09:20:00.322326       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 09:20:00.322411       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 09:20:00.396927       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1129 09:20:00.399854       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:20:00.419574       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1129 09:20:00.431997       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:20:00.432131       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1129 09:20:00.432227       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1129 09:20:00.482293       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1129 09:20:00.482341       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1129 09:20:01.932942       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.542826    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e4bdb82-85e5-468b-80dc-0481c990f117-kube-proxy\") pod \"kube-proxy-4jxrn\" (UID: \"3e4bdb82-85e5-468b-80dc-0481c990f117\") " pod="kube-system/kube-proxy-4jxrn"
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.542946    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d4743cee-0834-4a44-9cf7-d0228aa5b843-cni-cfg\") pod \"kindnet-58g5f\" (UID: \"d4743cee-0834-4a44-9cf7-d0228aa5b843\") " pod="kube-system/kindnet-58g5f"
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.543093    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4743cee-0834-4a44-9cf7-d0228aa5b843-xtables-lock\") pod \"kindnet-58g5f\" (UID: \"d4743cee-0834-4a44-9cf7-d0228aa5b843\") " pod="kube-system/kindnet-58g5f"
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.543236    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcrqh\" (UniqueName: \"kubernetes.io/projected/3e4bdb82-85e5-468b-80dc-0481c990f117-kube-api-access-zcrqh\") pod \"kube-proxy-4jxrn\" (UID: \"3e4bdb82-85e5-468b-80dc-0481c990f117\") " pod="kube-system/kube-proxy-4jxrn"
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.543388    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4743cee-0834-4a44-9cf7-d0228aa5b843-lib-modules\") pod \"kindnet-58g5f\" (UID: \"d4743cee-0834-4a44-9cf7-d0228aa5b843\") " pod="kube-system/kindnet-58g5f"
	Nov 29 09:20:15 old-k8s-version-071895 kubelet[1545]: I1129 09:20:15.543527    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfjbl\" (UniqueName: \"kubernetes.io/projected/d4743cee-0834-4a44-9cf7-d0228aa5b843-kube-api-access-hfjbl\") pod \"kindnet-58g5f\" (UID: \"d4743cee-0834-4a44-9cf7-d0228aa5b843\") " pod="kube-system/kindnet-58g5f"
	Nov 29 09:20:16 old-k8s-version-071895 kubelet[1545]: I1129 09:20:16.904236    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4jxrn" podStartSLOduration=1.904182809 podCreationTimestamp="2025-11-29 09:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:20:16.903944744 +0000 UTC m=+14.574033893" watchObservedRunningTime="2025-11-29 09:20:16.904182809 +0000 UTC m=+14.574271949"
	Nov 29 09:20:22 old-k8s-version-071895 kubelet[1545]: I1129 09:20:22.690149    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-58g5f" podStartSLOduration=5.068996977 podCreationTimestamp="2025-11-29 09:20:15 +0000 UTC" firstStartedPulling="2025-11-29 09:20:16.131889821 +0000 UTC m=+13.801978953" lastFinishedPulling="2025-11-29 09:20:18.75299704 +0000 UTC m=+16.423086171" observedRunningTime="2025-11-29 09:20:19.919717563 +0000 UTC m=+17.589806703" watchObservedRunningTime="2025-11-29 09:20:22.690104195 +0000 UTC m=+20.360193335"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.372571    1545 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.424392    1545 topology_manager.go:215] "Topology Admit Handler" podUID="784fe707-ae15-4eae-a70c-ec084ce3d812" podNamespace="kube-system" podName="storage-provisioner"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.431465    1545 topology_manager.go:215] "Topology Admit Handler" podUID="c6b5f2ee-df4f-40a3-be2e-6f16e858e497" podNamespace="kube-system" podName="coredns-5dd5756b68-htmzr"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.459512    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/784fe707-ae15-4eae-a70c-ec084ce3d812-tmp\") pod \"storage-provisioner\" (UID: \"784fe707-ae15-4eae-a70c-ec084ce3d812\") " pod="kube-system/storage-provisioner"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.459744    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzcr9\" (UniqueName: \"kubernetes.io/projected/784fe707-ae15-4eae-a70c-ec084ce3d812-kube-api-access-hzcr9\") pod \"storage-provisioner\" (UID: \"784fe707-ae15-4eae-a70c-ec084ce3d812\") " pod="kube-system/storage-provisioner"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.459885    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9tz\" (UniqueName: \"kubernetes.io/projected/c6b5f2ee-df4f-40a3-be2e-6f16e858e497-kube-api-access-ch9tz\") pod \"coredns-5dd5756b68-htmzr\" (UID: \"c6b5f2ee-df4f-40a3-be2e-6f16e858e497\") " pod="kube-system/coredns-5dd5756b68-htmzr"
	Nov 29 09:20:29 old-k8s-version-071895 kubelet[1545]: I1129 09:20:29.460022    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6b5f2ee-df4f-40a3-be2e-6f16e858e497-config-volume\") pod \"coredns-5dd5756b68-htmzr\" (UID: \"c6b5f2ee-df4f-40a3-be2e-6f16e858e497\") " pod="kube-system/coredns-5dd5756b68-htmzr"
	Nov 29 09:20:30 old-k8s-version-071895 kubelet[1545]: I1129 09:20:30.997910    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.997856203 podCreationTimestamp="2025-11-29 09:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:20:30.970917027 +0000 UTC m=+28.641006167" watchObservedRunningTime="2025-11-29 09:20:30.997856203 +0000 UTC m=+28.667945343"
	Nov 29 09:20:33 old-k8s-version-071895 kubelet[1545]: I1129 09:20:33.708750    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-htmzr" podStartSLOduration=18.708653504 podCreationTimestamp="2025-11-29 09:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:20:30.99830195 +0000 UTC m=+28.668391090" watchObservedRunningTime="2025-11-29 09:20:33.708653504 +0000 UTC m=+31.378742653"
	Nov 29 09:20:33 old-k8s-version-071895 kubelet[1545]: I1129 09:20:33.709581    1545 topology_manager.go:215] "Topology Admit Handler" podUID="3abcbd08-d7c4-4a13-b94c-6f6424975411" podNamespace="default" podName="busybox"
	Nov 29 09:20:33 old-k8s-version-071895 kubelet[1545]: W1129 09:20:33.759772    1545 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-071895" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-071895' and this object
	Nov 29 09:20:33 old-k8s-version-071895 kubelet[1545]: E1129 09:20:33.759821    1545 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-071895" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-071895' and this object
	Nov 29 09:20:33 old-k8s-version-071895 kubelet[1545]: I1129 09:20:33.794129    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w6jg\" (UniqueName: \"kubernetes.io/projected/3abcbd08-d7c4-4a13-b94c-6f6424975411-kube-api-access-7w6jg\") pod \"busybox\" (UID: \"3abcbd08-d7c4-4a13-b94c-6f6424975411\") " pod="default/busybox"
	Nov 29 09:20:34 old-k8s-version-071895 kubelet[1545]: E1129 09:20:34.906850    1545 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 09:20:34 old-k8s-version-071895 kubelet[1545]: E1129 09:20:34.908357    1545 projected.go:198] Error preparing data for projected volume kube-api-access-7w6jg for pod default/busybox: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 09:20:34 old-k8s-version-071895 kubelet[1545]: E1129 09:20:34.908523    1545 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3abcbd08-d7c4-4a13-b94c-6f6424975411-kube-api-access-7w6jg podName:3abcbd08-d7c4-4a13-b94c-6f6424975411 nodeName:}" failed. No retries permitted until 2025-11-29 09:20:35.408496185 +0000 UTC m=+33.078585316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7w6jg" (UniqueName: "kubernetes.io/projected/3abcbd08-d7c4-4a13-b94c-6f6424975411-kube-api-access-7w6jg") pod "busybox" (UID: "3abcbd08-d7c4-4a13-b94c-6f6424975411") : failed to sync configmap cache: timed out waiting for the condition
	Nov 29 09:20:37 old-k8s-version-071895 kubelet[1545]: I1129 09:20:37.992486    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.8817945099999998 podCreationTimestamp="2025-11-29 09:20:33 +0000 UTC" firstStartedPulling="2025-11-29 09:20:35.706009491 +0000 UTC m=+33.376098623" lastFinishedPulling="2025-11-29 09:20:37.816649729 +0000 UTC m=+35.486738869" observedRunningTime="2025-11-29 09:20:37.991952135 +0000 UTC m=+35.662041292" watchObservedRunningTime="2025-11-29 09:20:37.992434756 +0000 UTC m=+35.662523896"
	
	
	==> storage-provisioner [359d9432ef4979d387512d5a2a5a3cd9fb7a0987f4a3540a23407b70f7faf163] <==
	I1129 09:20:30.214942       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:20:30.235967       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:20:30.236210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1129 09:20:30.252227       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:20:30.255628       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-071895_105725d4-e591-4aa3-af10-2659a9fed2c2!
	I1129 09:20:30.273258       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8dbb900-fced-4c3d-a6ea-15b88c536670", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-071895_105725d4-e591-4aa3-af10-2659a9fed2c2 became leader
	I1129 09:20:30.355956       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-071895_105725d4-e591-4aa3-af10-2659a9fed2c2!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-071895 -n old-k8s-version-071895
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-071895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (18.69s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (13.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-230403 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [687a18aa-1034-4892-9b86-c0ee20e62df3] Pending
helpers_test.go:352: "busybox" [687a18aa-1034-4892-9b86-c0ee20e62df3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [687a18aa-1034-4892-9b86-c0ee20e62df3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004103497s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-230403 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
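helpers_test.go note: the assertion above is the point of failure: the test execs into the busybox pod, reads the soft open-file limit via 'ulimit -n', and compares it with the 1048576 it expects, but the pod reports 1024. A minimal sketch of that kind of check, reusing the context and pod names shown above and shelling out to kubectl from Go (illustrative only, not the actual start_stop_delete_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const expectedNofile = "1048576" // value the test expects from 'ulimit -n' inside the pod
		out, err := exec.Command("kubectl", "--context", "no-preload-230403",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			fmt.Println("kubectl exec failed:", err)
			return
		}
		got := strings.TrimSpace(string(out))
		if got != expectedNofile {
			fmt.Printf("'ulimit -n' returned %s, expected %s\n", got, expectedNofile)
		}
	}

The 1024 seen here suggests the pod is inheriting a default soft limit from somewhere lower in the stack rather than the raised value the test expects; the post-mortem that follows collects the container and cluster state needed to trace where that limit comes from.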
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-230403
helpers_test.go:243: (dbg) docker inspect no-preload-230403:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49",
	        "Created": "2025-11-29T09:20:14.069614189Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 223201,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:20:14.140181318Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49/hostname",
	        "HostsPath": "/var/lib/docker/containers/c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49/hosts",
	        "LogPath": "/var/lib/docker/containers/c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49/c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49-json.log",
	        "Name": "/no-preload-230403",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-230403:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-230403",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49",
	                "LowerDir": "/var/lib/docker/overlay2/d501390884813a028d2dd42e0002041dd99bd31cc2c6dcd14de127ede5a3bb12-init/diff:/var/lib/docker/overlay2/fc2ab0019906b90b3f033fa414f560878b73f7ff0ebdf77a0b554a40813009d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d501390884813a028d2dd42e0002041dd99bd31cc2c6dcd14de127ede5a3bb12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d501390884813a028d2dd42e0002041dd99bd31cc2c6dcd14de127ede5a3bb12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d501390884813a028d2dd42e0002041dd99bd31cc2c6dcd14de127ede5a3bb12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-230403",
	                "Source": "/var/lib/docker/volumes/no-preload-230403/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-230403",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-230403",
	                "name.minikube.sigs.k8s.io": "no-preload-230403",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "309d99d893613dbe91c496273c4ce1e014b087a33d4a9bf499bf2626a1e7db7f",
	            "SandboxKey": "/var/run/docker/netns/309d99d89361",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-230403": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:dc:47:0a:b5:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8e57cbedabd635b132c659bc736afc22b097ad423534099d6707207de613f503",
	                    "EndpointID": "9e50ae8389e18b989029e477de261d3d9acc6e6b765380ed10af568403d10d8f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-230403",
	                        "c13fc280ad62"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
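helpers_test.go note: in the HostConfig section of the inspect output above, "Ulimits" is an empty list, so no per-container nofile override is set at the Docker level for this profile container. A small sketch, assuming only that the docker CLI is on PATH and reusing the container name above, that narrows the same inspection to that one field (again illustrative, not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Query only HostConfig.Ulimits from the container the post-mortem inspected.
		out, err := exec.Command("docker", "inspect",
			"--format", "{{json .HostConfig.Ulimits}}", "no-preload-230403").CombinedOutput()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		// An empty list (as in the full inspect output above) means no per-container
		// nofile override here; whatever 'ulimit -n' reports inside the pod is
		// inherited from defaults elsewhere in the stack, not from this setting.
		fmt.Println(string(out))
	}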
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-230403 -n no-preload-230403
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-230403 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-230403 logs -n 25: (1.289234866s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p cilium-420729 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo containerd config dump                                                                                                                                                                                                        │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo crio config                                                                                                                                                                                                                   │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ delete  │ -p cilium-420729                                                                                                                                                                                                                                    │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p force-systemd-env-559836 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ ssh     │ force-systemd-env-559836 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ delete  │ -p force-systemd-env-559836                                                                                                                                                                                                                         │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p cert-expiration-592440 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ delete  │ -p running-upgrade-115889                                                                                                                                                                                                                           │ running-upgrade-115889   │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ start   │ -p cert-options-515442 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:19 UTC │
	│ ssh     │ cert-options-515442 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ ssh     │ -p cert-options-515442 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ delete  │ -p cert-options-515442                                                                                                                                                                                                                              │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p cert-expiration-592440 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ delete  │ -p cert-expiration-592440                                                                                                                                                                                                                           │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-230403        │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-071895 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ stop    │ -p old-k8s-version-071895 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-071895 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:21:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:21:06.410495  228280 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:21:06.410695  228280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:21:06.410727  228280 out.go:374] Setting ErrFile to fd 2...
	I1129 09:21:06.410751  228280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:21:06.411163  228280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:21:06.411671  228280 out.go:368] Setting JSON to false
	I1129 09:21:06.412836  228280 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3817,"bootTime":1764404249,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 09:21:06.412977  228280 start.go:143] virtualization:  
	I1129 09:21:06.416001  228280 out.go:179] * [old-k8s-version-071895] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:21:06.418407  228280 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:21:06.418485  228280 notify.go:221] Checking for updates...
	I1129 09:21:06.424117  228280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:21:06.426958  228280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:21:06.429973  228280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 09:21:06.432924  228280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:21:06.435924  228280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:21:06.439187  228280 config.go:182] Loaded profile config "old-k8s-version-071895": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:21:06.442810  228280 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1129 09:21:06.445778  228280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:21:06.492835  228280 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:21:06.492955  228280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:21:06.558997  228280 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:21:06.548578131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:21:06.559108  228280 docker.go:319] overlay module found
	I1129 09:21:06.564075  228280 out.go:179] * Using the docker driver based on existing profile
	I1129 09:21:06.566986  228280 start.go:309] selected driver: docker
	I1129 09:21:06.567013  228280 start.go:927] validating driver "docker" against &{Name:old-k8s-version-071895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-071895 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountStr
ing: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:21:06.567121  228280 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:21:06.567824  228280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:21:06.633484  228280 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:21:06.623607659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:21:06.633839  228280 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:21:06.633873  228280 cni.go:84] Creating CNI manager for ""
	I1129 09:21:06.633928  228280 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:21:06.633967  228280 start.go:353] cluster config:
	{Name:old-k8s-version-071895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-071895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:21:06.637146  228280 out.go:179] * Starting "old-k8s-version-071895" primary control-plane node in "old-k8s-version-071895" cluster
	I1129 09:21:06.639888  228280 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:21:06.642862  228280 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:21:06.646320  228280 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:21:06.646396  228280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:21:06.646635  228280 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1129 09:21:06.646646  228280 cache.go:65] Caching tarball of preloaded images
	I1129 09:21:06.646715  228280 preload.go:238] Found /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1129 09:21:06.646723  228280 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1129 09:21:06.646949  228280 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/config.json ...
	I1129 09:21:06.667931  228280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:21:06.667954  228280 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:21:06.667975  228280 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:21:06.668005  228280 start.go:360] acquireMachinesLock for old-k8s-version-071895: {Name:mk9c1843aef8ee4917771c9dd83cfe5ed673c322 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:21:06.668078  228280 start.go:364] duration metric: took 45.26µs to acquireMachinesLock for "old-k8s-version-071895"
	I1129 09:21:06.668100  228280 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:21:06.668109  228280 fix.go:54] fixHost starting: 
	I1129 09:21:06.668364  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:06.688163  228280 fix.go:112] recreateIfNeeded on old-k8s-version-071895: state=Stopped err=<nil>
	W1129 09:21:06.688202  228280 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:21:03.583045  222878 addons.go:530] duration metric: took 1.509163341s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:21:03.676529  222878 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-230403" context rescaled to 1 replicas
	W1129 09:21:05.175503  222878 node_ready.go:57] node "no-preload-230403" has "Ready":"False" status (will retry)
	W1129 09:21:07.176694  222878 node_ready.go:57] node "no-preload-230403" has "Ready":"False" status (will retry)
	I1129 09:21:06.691493  228280 out.go:252] * Restarting existing docker container for "old-k8s-version-071895" ...
	I1129 09:21:06.691578  228280 cli_runner.go:164] Run: docker start old-k8s-version-071895
	I1129 09:21:06.972406  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:06.996938  228280 kic.go:430] container "old-k8s-version-071895" state is running.
	I1129 09:21:06.998107  228280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-071895
	I1129 09:21:07.025090  228280 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/config.json ...
	I1129 09:21:07.025366  228280 machine.go:94] provisionDockerMachine start ...
	I1129 09:21:07.025444  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:07.045842  228280 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:07.046357  228280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:21:07.046373  228280 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:21:07.048601  228280 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58034->127.0.0.1:33063: read: connection reset by peer
	I1129 09:21:10.208260  228280 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-071895
	
	I1129 09:21:10.208287  228280 ubuntu.go:182] provisioning hostname "old-k8s-version-071895"
	I1129 09:21:10.208352  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:10.228023  228280 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:10.228478  228280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:21:10.228497  228280 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-071895 && echo "old-k8s-version-071895" | sudo tee /etc/hostname
	I1129 09:21:10.394406  228280 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-071895
	
	I1129 09:21:10.394557  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:10.412852  228280 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:10.413166  228280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:21:10.413182  228280 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-071895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-071895/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-071895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:21:10.565253  228280 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:21:10.565284  228280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-2317/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-2317/.minikube}
	I1129 09:21:10.565343  228280 ubuntu.go:190] setting up certificates
	I1129 09:21:10.565353  228280 provision.go:84] configureAuth start
	I1129 09:21:10.565442  228280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-071895
	I1129 09:21:10.583494  228280 provision.go:143] copyHostCerts
	I1129 09:21:10.583579  228280 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem, removing ...
	I1129 09:21:10.583600  228280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem
	I1129 09:21:10.583679  228280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem (1082 bytes)
	I1129 09:21:10.583791  228280 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem, removing ...
	I1129 09:21:10.583803  228280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem
	I1129 09:21:10.583832  228280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem (1123 bytes)
	I1129 09:21:10.583901  228280 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem, removing ...
	I1129 09:21:10.583912  228280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem
	I1129 09:21:10.583939  228280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem (1679 bytes)
	I1129 09:21:10.584008  228280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-071895 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-071895]
	I1129 09:21:11.222882  228280 provision.go:177] copyRemoteCerts
	I1129 09:21:11.222982  228280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:21:11.223046  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:11.241252  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:11.348525  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:21:11.367778  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1129 09:21:11.386772  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:21:11.404498  228280 provision.go:87] duration metric: took 839.125516ms to configureAuth
	I1129 09:21:11.404524  228280 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:21:11.404819  228280 config.go:182] Loaded profile config "old-k8s-version-071895": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:21:11.404835  228280 machine.go:97] duration metric: took 4.379454619s to provisionDockerMachine
	I1129 09:21:11.404843  228280 start.go:293] postStartSetup for "old-k8s-version-071895" (driver="docker")
	I1129 09:21:11.404858  228280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:21:11.404914  228280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:21:11.404956  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	W1129 09:21:09.674698  222878 node_ready.go:57] node "no-preload-230403" has "Ready":"False" status (will retry)
	W1129 09:21:12.174772  222878 node_ready.go:57] node "no-preload-230403" has "Ready":"False" status (will retry)
	I1129 09:21:11.423195  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:11.529586  228280 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:21:11.533831  228280 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:21:11.533863  228280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:21:11.533875  228280 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/addons for local assets ...
	I1129 09:21:11.533935  228280 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/files for local assets ...
	I1129 09:21:11.534036  228280 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem -> 41372.pem in /etc/ssl/certs
	I1129 09:21:11.534145  228280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:21:11.542551  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:21:11.561895  228280 start.go:296] duration metric: took 157.036248ms for postStartSetup
	I1129 09:21:11.561988  228280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:21:11.562045  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:11.579633  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:11.681878  228280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:21:11.687047  228280 fix.go:56] duration metric: took 5.018931281s for fixHost
	I1129 09:21:11.687073  228280 start.go:83] releasing machines lock for "old-k8s-version-071895", held for 5.018983342s
	I1129 09:21:11.687157  228280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-071895
	I1129 09:21:11.704241  228280 ssh_runner.go:195] Run: cat /version.json
	I1129 09:21:11.704297  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:11.704492  228280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:21:11.704562  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:11.727286  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:11.734245  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:11.835008  228280 ssh_runner.go:195] Run: systemctl --version
	I1129 09:21:11.926191  228280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:21:11.930862  228280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:21:11.930931  228280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:21:11.939076  228280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:21:11.939101  228280 start.go:496] detecting cgroup driver to use...
	I1129 09:21:11.939134  228280 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 09:21:11.939182  228280 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:21:11.957131  228280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:21:11.971869  228280 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:21:11.971945  228280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:21:11.987710  228280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:21:12.004024  228280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:21:12.126126  228280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:21:12.247513  228280 docker.go:234] disabling docker service ...
	I1129 09:21:12.247625  228280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:21:12.263457  228280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:21:12.277571  228280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:21:12.404057  228280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:21:12.517241  228280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:21:12.530926  228280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:21:12.546866  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1129 09:21:12.556657  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:21:12.566469  228280 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1129 09:21:12.566584  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1129 09:21:12.578618  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:21:12.588255  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:21:12.597871  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:21:12.607144  228280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:21:12.615376  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:21:12.625063  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:21:12.634874  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:21:12.644330  228280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:21:12.652107  228280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:21:12.660102  228280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:21:12.781142  228280 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:21:12.932754  228280 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:21:12.932865  228280 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:21:12.937243  228280 start.go:564] Will wait 60s for crictl version
	I1129 09:21:12.937348  228280 ssh_runner.go:195] Run: which crictl
	I1129 09:21:12.941539  228280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:21:12.973098  228280 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:21:12.973218  228280 ssh_runner.go:195] Run: containerd --version
	I1129 09:21:12.993509  228280 ssh_runner.go:195] Run: containerd --version
	I1129 09:21:13.024405  228280 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1129 09:21:13.027409  228280 cli_runner.go:164] Run: docker network inspect old-k8s-version-071895 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:21:13.043452  228280 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:21:13.047557  228280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:21:13.057974  228280 kubeadm.go:884] updating cluster {Name:old-k8s-version-071895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-071895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:21:13.058101  228280 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:21:13.058163  228280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:21:13.087299  228280 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:21:13.087321  228280 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:21:13.087382  228280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:21:13.113687  228280 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:21:13.113712  228280 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:21:13.113722  228280 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1129 09:21:13.113838  228280 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-071895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-071895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:21:13.113913  228280 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:21:13.140272  228280 cni.go:84] Creating CNI manager for ""
	I1129 09:21:13.140301  228280 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:21:13.140326  228280 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:21:13.140354  228280 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-071895 NodeName:old-k8s-version-071895 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:21:13.140483  228280 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-071895"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:21:13.140557  228280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1129 09:21:13.149717  228280 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:21:13.149814  228280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:21:13.157672  228280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1129 09:21:13.171306  228280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:21:13.185388  228280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1129 09:21:13.199265  228280 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:21:13.203237  228280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:21:13.214101  228280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:21:13.332118  228280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:21:13.348175  228280 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895 for IP: 192.168.76.2
	I1129 09:21:13.348246  228280 certs.go:195] generating shared ca certs ...
	I1129 09:21:13.348293  228280 certs.go:227] acquiring lock for ca certs: {Name:mke655c14945a8520f2f9de36531df923afb2bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:21:13.348496  228280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key
	I1129 09:21:13.348592  228280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key
	I1129 09:21:13.348644  228280 certs.go:257] generating profile certs ...
	I1129 09:21:13.348787  228280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.key
	I1129 09:21:13.348907  228280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/apiserver.key.501f6453
	I1129 09:21:13.349002  228280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/proxy-client.key
	I1129 09:21:13.349188  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem (1338 bytes)
	W1129 09:21:13.349262  228280 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137_empty.pem, impossibly tiny 0 bytes
	I1129 09:21:13.349292  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:21:13.349362  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:21:13.349433  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:21:13.349480  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem (1679 bytes)
	I1129 09:21:13.349603  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:21:13.350469  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:21:13.376420  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1129 09:21:13.397277  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:21:13.421443  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:21:13.445862  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1129 09:21:13.471142  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:21:13.496107  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:21:13.521328  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:21:13.553546  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem --> /usr/share/ca-certificates/4137.pem (1338 bytes)
	I1129 09:21:13.574753  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /usr/share/ca-certificates/41372.pem (1708 bytes)
	I1129 09:21:13.595698  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:21:13.617559  228280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:21:13.632871  228280 ssh_runner.go:195] Run: openssl version
	I1129 09:21:13.640135  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4137.pem && ln -fs /usr/share/ca-certificates/4137.pem /etc/ssl/certs/4137.pem"
	I1129 09:21:13.649149  228280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4137.pem
	I1129 09:21:13.653500  228280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/4137.pem
	I1129 09:21:13.653609  228280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4137.pem
	I1129 09:21:13.713432  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4137.pem /etc/ssl/certs/51391683.0"
	I1129 09:21:13.722822  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41372.pem && ln -fs /usr/share/ca-certificates/41372.pem /etc/ssl/certs/41372.pem"
	I1129 09:21:13.733400  228280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41372.pem
	I1129 09:21:13.737680  228280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/41372.pem
	I1129 09:21:13.737792  228280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41372.pem
	I1129 09:21:13.780489  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41372.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:21:13.789820  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:21:13.798135  228280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:21:13.802169  228280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:21:13.802240  228280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:21:13.850151  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:21:13.858569  228280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:21:13.862729  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:21:13.905877  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:21:13.960878  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:21:14.005329  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:21:14.095967  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:21:14.153295  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1129 09:21:14.222693  228280 kubeadm.go:401] StartCluster: {Name:old-k8s-version-071895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-071895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:21:14.222785  228280 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:21:14.222874  228280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:21:14.261358  228280 cri.go:89] found id: "ea08ec4514b5c17cbba723d8243367bf487a5f488d4baf7c51179fa441556160"
	I1129 09:21:14.261386  228280 cri.go:89] found id: "f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d"
	I1129 09:21:14.261401  228280 cri.go:89] found id: "359d9432ef4979d387512d5a2a5a3cd9fb7a0987f4a3540a23407b70f7faf163"
	I1129 09:21:14.261406  228280 cri.go:89] found id: "db1d77c6c85eaf5ebd7dc839fb54d40271ee80c34795b249a47534f35c064f1c"
	I1129 09:21:14.261409  228280 cri.go:89] found id: "000a8de26034dcdc6da38237d77f79fa914b3088e593f0bbd13e14b39b42bf00"
	I1129 09:21:14.261413  228280 cri.go:89] found id: "c6e9c9ab04ae16e634fbb9b4e1d16587356b43ecc4799412da2e56e79409870b"
	I1129 09:21:14.261416  228280 cri.go:89] found id: "41dff26eb8e679cc29a87f83f59d117073bdaeb9ac41cb8ac8ee1cb32c92511a"
	I1129 09:21:14.261448  228280 cri.go:89] found id: "d34a4ced6121deea5f0e58655a9a45e86fccdde412c9acf3d1e35ab330cd1b4b"
	I1129 09:21:14.261462  228280 cri.go:89] found id: "7c5e9c05d20b870a1e96cdb0bbf1479f013609a2bbcde73ff5f9b106d4a35049"
	I1129 09:21:14.261469  228280 cri.go:89] found id: ""
	I1129 09:21:14.261540  228280 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1129 09:21:14.301781  228280 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291","pid":916,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291/rootfs","created":"2025-11-29T09:21:14.203252585Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-071895_a96342591fc7bb3ae41b190b02d65234","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-071895","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a96342591fc7bb3ae41b190b02d65234"},"owner":"root"},{"ociVersion":"1.2.1","id":"b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6","pid":820,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6/rootfs","created":"2025-11-29T09:21:14.035519986Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-071895_ef3ada0e43d54ea2068060e8f13708f8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-071895","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ef3ada0e43d54ea2068060e8f13708f8"},"owner":"root"},{"ociVersion":"1.2.1","id":"fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b","pid":929,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b/rootfs","created":"2025-11-29T09:21:14.239088381Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-071895_8b6ba97137797f9d8d5bef81cd980a7a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-071895","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8b6ba97137797f9d8d5bef81cd980a7a"},"owner":"root"}]
	I1129 09:21:14.301935  228280 cri.go:126] list returned 3 containers
	I1129 09:21:14.301953  228280 cri.go:129] container: {ID:72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291 Status:created}
	I1129 09:21:14.301981  228280 cri.go:131] skipping 72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291 - not in ps
	I1129 09:21:14.301995  228280 cri.go:129] container: {ID:b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6 Status:running}
	I1129 09:21:14.302002  228280 cri.go:131] skipping b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6 - not in ps
	I1129 09:21:14.302007  228280 cri.go:129] container: {ID:fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b Status:created}
	I1129 09:21:14.302016  228280 cri.go:131] skipping fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b - not in ps
	I1129 09:21:14.302084  228280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:21:14.326916  228280 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:21:14.326935  228280 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:21:14.327012  228280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:21:14.356154  228280 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:21:14.356847  228280 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-071895" does not appear in /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:21:14.357148  228280 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-2317/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-071895" cluster setting kubeconfig missing "old-k8s-version-071895" context setting]
	I1129 09:21:14.357658  228280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:21:14.359252  228280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:21:14.381505  228280 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 09:21:14.381540  228280 kubeadm.go:602] duration metric: took 54.598987ms to restartPrimaryControlPlane
	I1129 09:21:14.381585  228280 kubeadm.go:403] duration metric: took 158.875232ms to StartCluster
	I1129 09:21:14.381605  228280 settings.go:142] acquiring lock: {Name:mk44917d1324740eeda65bf3aa312ad1561d3ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:21:14.381692  228280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:21:14.382612  228280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:21:14.382863  228280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:21:14.383259  228280 config.go:182] Loaded profile config "old-k8s-version-071895": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:21:14.383290  228280 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:21:14.383388  228280 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-071895"
	I1129 09:21:14.383405  228280 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-071895"
	W1129 09:21:14.383411  228280 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:21:14.383413  228280 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-071895"
	I1129 09:21:14.383434  228280 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:21:14.383434  228280 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-071895"
	I1129 09:21:14.383754  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:14.383866  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:14.384331  228280 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-071895"
	I1129 09:21:14.384357  228280 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-071895"
	W1129 09:21:14.384377  228280 addons.go:248] addon metrics-server should already be in state true
	I1129 09:21:14.384406  228280 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:21:14.384884  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:14.387184  228280 addons.go:70] Setting dashboard=true in profile "old-k8s-version-071895"
	I1129 09:21:14.387440  228280 addons.go:239] Setting addon dashboard=true in "old-k8s-version-071895"
	W1129 09:21:14.387461  228280 addons.go:248] addon dashboard should already be in state true
	I1129 09:21:14.387493  228280 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:21:14.387982  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:14.399445  228280 out.go:179] * Verifying Kubernetes components...
	I1129 09:21:14.405460  228280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:21:14.448731  228280 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-071895"
	W1129 09:21:14.448755  228280 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:21:14.448780  228280 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:21:14.449192  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:14.462410  228280 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:21:14.465662  228280 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:21:14.468736  228280 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1129 09:21:14.468957  228280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:21:14.468998  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:21:14.470223  228280 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:21:14.470302  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:14.470945  228280 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1129 09:21:14.470967  228280 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1129 09:21:14.471023  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:14.474914  228280 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:21:14.474939  228280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:21:14.475005  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:14.512791  228280 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:21:14.512815  228280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:21:14.512893  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:14.545902  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:14.564885  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:14.577463  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:14.584658  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:14.798852  228280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:21:14.923287  228280 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-071895" to be "Ready" ...
	I1129 09:21:15.123215  228280 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1129 09:21:15.123241  228280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1129 09:21:15.268321  228280 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1129 09:21:15.268349  228280 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1129 09:21:15.273659  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:21:15.273687  228280 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:21:15.309135  228280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:21:15.367180  228280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:21:15.371388  228280 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:21:15.371415  228280 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1129 09:21:15.414920  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:21:15.414946  228280 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:21:15.470944  228280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:21:15.561828  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:21:15.561856  228280 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:21:15.845360  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:21:15.845380  228280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:21:15.883333  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:21:15.883357  228280 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:21:15.912154  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:21:15.912177  228280 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:21:15.935219  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:21:15.935266  228280 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:21:15.970998  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:21:15.971021  228280 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:21:16.150201  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:21:16.150226  228280 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:21:16.311576  228280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1129 09:21:14.174893  222878 node_ready.go:57] node "no-preload-230403" has "Ready":"False" status (will retry)
	I1129 09:21:15.674937  222878 node_ready.go:49] node "no-preload-230403" is "Ready"
	I1129 09:21:15.674965  222878 node_ready.go:38] duration metric: took 12.503411882s for node "no-preload-230403" to be "Ready" ...
	I1129 09:21:15.674979  222878 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:21:15.675039  222878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:21:15.692194  222878 api_server.go:72] duration metric: took 13.618713226s to wait for apiserver process to appear ...
	I1129 09:21:15.692218  222878 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:21:15.692237  222878 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:21:15.700495  222878 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 09:21:15.701610  222878 api_server.go:141] control plane version: v1.34.1
	I1129 09:21:15.701671  222878 api_server.go:131] duration metric: took 9.446138ms to wait for apiserver health ...
	I1129 09:21:15.701703  222878 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:21:15.710405  222878 system_pods.go:59] 8 kube-system pods found
	I1129 09:21:15.710508  222878 system_pods.go:61] "coredns-66bc5c9577-6sxgs" [8966af76-b077-4486-af59-aced26be0a08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:15.710536  222878 system_pods.go:61] "etcd-no-preload-230403" [06b0c9a5-89aa-4112-b1c3-a7e9a015aebd] Running
	I1129 09:21:15.710564  222878 system_pods.go:61] "kindnet-9vm4c" [1aa125e0-c584-41e4-8b34-60b0e868cd6a] Running
	I1129 09:21:15.710597  222878 system_pods.go:61] "kube-apiserver-no-preload-230403" [7c846e37-1b9e-46c5-83de-1f89a235429f] Running
	I1129 09:21:15.710621  222878 system_pods.go:61] "kube-controller-manager-no-preload-230403" [575a09ff-7c65-41f4-a394-39fede64fc46] Running
	I1129 09:21:15.710640  222878 system_pods.go:61] "kube-proxy-dk26g" [49e4de55-0854-4676-bee9-e107a3b5fae6] Running
	I1129 09:21:15.710677  222878 system_pods.go:61] "kube-scheduler-no-preload-230403" [6ea14dce-1037-4a73-b15a-3a88d98ae0c1] Running
	I1129 09:21:15.710712  222878 system_pods.go:61] "storage-provisioner" [bcd1577c-a3b1-415a-b6fe-ddc56dd52128] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:21:15.710736  222878 system_pods.go:74] duration metric: took 9.013872ms to wait for pod list to return data ...
	I1129 09:21:15.710761  222878 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:21:15.714856  222878 default_sa.go:45] found service account: "default"
	I1129 09:21:15.714882  222878 default_sa.go:55] duration metric: took 4.101301ms for default service account to be created ...
	I1129 09:21:15.714893  222878 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:21:15.718966  222878 system_pods.go:86] 8 kube-system pods found
	I1129 09:21:15.719052  222878 system_pods.go:89] "coredns-66bc5c9577-6sxgs" [8966af76-b077-4486-af59-aced26be0a08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:15.719074  222878 system_pods.go:89] "etcd-no-preload-230403" [06b0c9a5-89aa-4112-b1c3-a7e9a015aebd] Running
	I1129 09:21:15.719111  222878 system_pods.go:89] "kindnet-9vm4c" [1aa125e0-c584-41e4-8b34-60b0e868cd6a] Running
	I1129 09:21:15.719136  222878 system_pods.go:89] "kube-apiserver-no-preload-230403" [7c846e37-1b9e-46c5-83de-1f89a235429f] Running
	I1129 09:21:15.719157  222878 system_pods.go:89] "kube-controller-manager-no-preload-230403" [575a09ff-7c65-41f4-a394-39fede64fc46] Running
	I1129 09:21:15.719179  222878 system_pods.go:89] "kube-proxy-dk26g" [49e4de55-0854-4676-bee9-e107a3b5fae6] Running
	I1129 09:21:15.719213  222878 system_pods.go:89] "kube-scheduler-no-preload-230403" [6ea14dce-1037-4a73-b15a-3a88d98ae0c1] Running
	I1129 09:21:15.719238  222878 system_pods.go:89] "storage-provisioner" [bcd1577c-a3b1-415a-b6fe-ddc56dd52128] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:21:15.719286  222878 retry.go:31] will retry after 300.87979ms: missing components: kube-dns
	I1129 09:21:16.024949  222878 system_pods.go:86] 8 kube-system pods found
	I1129 09:21:16.025038  222878 system_pods.go:89] "coredns-66bc5c9577-6sxgs" [8966af76-b077-4486-af59-aced26be0a08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:16.025062  222878 system_pods.go:89] "etcd-no-preload-230403" [06b0c9a5-89aa-4112-b1c3-a7e9a015aebd] Running
	I1129 09:21:16.025105  222878 system_pods.go:89] "kindnet-9vm4c" [1aa125e0-c584-41e4-8b34-60b0e868cd6a] Running
	I1129 09:21:16.025146  222878 system_pods.go:89] "kube-apiserver-no-preload-230403" [7c846e37-1b9e-46c5-83de-1f89a235429f] Running
	I1129 09:21:16.025175  222878 system_pods.go:89] "kube-controller-manager-no-preload-230403" [575a09ff-7c65-41f4-a394-39fede64fc46] Running
	I1129 09:21:16.025196  222878 system_pods.go:89] "kube-proxy-dk26g" [49e4de55-0854-4676-bee9-e107a3b5fae6] Running
	I1129 09:21:16.025226  222878 system_pods.go:89] "kube-scheduler-no-preload-230403" [6ea14dce-1037-4a73-b15a-3a88d98ae0c1] Running
	I1129 09:21:16.025255  222878 system_pods.go:89] "storage-provisioner" [bcd1577c-a3b1-415a-b6fe-ddc56dd52128] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:21:16.025288  222878 retry.go:31] will retry after 370.333858ms: missing components: kube-dns
	I1129 09:21:16.401237  222878 system_pods.go:86] 8 kube-system pods found
	I1129 09:21:16.401318  222878 system_pods.go:89] "coredns-66bc5c9577-6sxgs" [8966af76-b077-4486-af59-aced26be0a08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:16.401341  222878 system_pods.go:89] "etcd-no-preload-230403" [06b0c9a5-89aa-4112-b1c3-a7e9a015aebd] Running
	I1129 09:21:16.401382  222878 system_pods.go:89] "kindnet-9vm4c" [1aa125e0-c584-41e4-8b34-60b0e868cd6a] Running
	I1129 09:21:16.401409  222878 system_pods.go:89] "kube-apiserver-no-preload-230403" [7c846e37-1b9e-46c5-83de-1f89a235429f] Running
	I1129 09:21:16.401433  222878 system_pods.go:89] "kube-controller-manager-no-preload-230403" [575a09ff-7c65-41f4-a394-39fede64fc46] Running
	I1129 09:21:16.401452  222878 system_pods.go:89] "kube-proxy-dk26g" [49e4de55-0854-4676-bee9-e107a3b5fae6] Running
	I1129 09:21:16.401483  222878 system_pods.go:89] "kube-scheduler-no-preload-230403" [6ea14dce-1037-4a73-b15a-3a88d98ae0c1] Running
	I1129 09:21:16.401511  222878 system_pods.go:89] "storage-provisioner" [bcd1577c-a3b1-415a-b6fe-ddc56dd52128] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:21:16.401541  222878 retry.go:31] will retry after 454.806267ms: missing components: kube-dns
	I1129 09:21:16.860495  222878 system_pods.go:86] 8 kube-system pods found
	I1129 09:21:16.860582  222878 system_pods.go:89] "coredns-66bc5c9577-6sxgs" [8966af76-b077-4486-af59-aced26be0a08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:16.860606  222878 system_pods.go:89] "etcd-no-preload-230403" [06b0c9a5-89aa-4112-b1c3-a7e9a015aebd] Running
	I1129 09:21:16.860648  222878 system_pods.go:89] "kindnet-9vm4c" [1aa125e0-c584-41e4-8b34-60b0e868cd6a] Running
	I1129 09:21:16.860677  222878 system_pods.go:89] "kube-apiserver-no-preload-230403" [7c846e37-1b9e-46c5-83de-1f89a235429f] Running
	I1129 09:21:16.860702  222878 system_pods.go:89] "kube-controller-manager-no-preload-230403" [575a09ff-7c65-41f4-a394-39fede64fc46] Running
	I1129 09:21:16.860724  222878 system_pods.go:89] "kube-proxy-dk26g" [49e4de55-0854-4676-bee9-e107a3b5fae6] Running
	I1129 09:21:16.860758  222878 system_pods.go:89] "kube-scheduler-no-preload-230403" [6ea14dce-1037-4a73-b15a-3a88d98ae0c1] Running
	I1129 09:21:16.860784  222878 system_pods.go:89] "storage-provisioner" [bcd1577c-a3b1-415a-b6fe-ddc56dd52128] Running
	I1129 09:21:16.860809  222878 system_pods.go:126] duration metric: took 1.145909329s to wait for k8s-apps to be running ...
	I1129 09:21:16.860831  222878 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:21:16.860919  222878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:21:16.884526  222878 system_svc.go:56] duration metric: took 23.687382ms WaitForService to wait for kubelet
	I1129 09:21:16.884595  222878 kubeadm.go:587] duration metric: took 14.811118806s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:21:16.884701  222878 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:21:16.893725  222878 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:21:16.893808  222878 node_conditions.go:123] node cpu capacity is 2
	I1129 09:21:16.893837  222878 node_conditions.go:105] duration metric: took 9.111326ms to run NodePressure ...
	I1129 09:21:16.893880  222878 start.go:242] waiting for startup goroutines ...
	I1129 09:21:16.893906  222878 start.go:247] waiting for cluster config update ...
	I1129 09:21:16.893932  222878 start.go:256] writing updated cluster config ...
	I1129 09:21:16.894255  222878 ssh_runner.go:195] Run: rm -f paused
	I1129 09:21:16.902798  222878 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:21:16.906667  222878 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6sxgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.914278  222878 pod_ready.go:94] pod "coredns-66bc5c9577-6sxgs" is "Ready"
	I1129 09:21:17.914336  222878 pod_ready.go:86] duration metric: took 1.007600556s for pod "coredns-66bc5c9577-6sxgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.920245  222878 pod_ready.go:83] waiting for pod "etcd-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.929670  222878 pod_ready.go:94] pod "etcd-no-preload-230403" is "Ready"
	I1129 09:21:17.929693  222878 pod_ready.go:86] duration metric: took 9.427397ms for pod "etcd-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.932241  222878 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.941780  222878 pod_ready.go:94] pod "kube-apiserver-no-preload-230403" is "Ready"
	I1129 09:21:17.941862  222878 pod_ready.go:86] duration metric: took 9.600141ms for pod "kube-apiserver-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.944516  222878 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:18.111607  222878 pod_ready.go:94] pod "kube-controller-manager-no-preload-230403" is "Ready"
	I1129 09:21:18.111682  222878 pod_ready.go:86] duration metric: took 167.098904ms for pod "kube-controller-manager-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:18.310592  222878 pod_ready.go:83] waiting for pod "kube-proxy-dk26g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:18.710865  222878 pod_ready.go:94] pod "kube-proxy-dk26g" is "Ready"
	I1129 09:21:18.710888  222878 pod_ready.go:86] duration metric: took 400.2757ms for pod "kube-proxy-dk26g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:18.911663  222878 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:19.311067  222878 pod_ready.go:94] pod "kube-scheduler-no-preload-230403" is "Ready"
	I1129 09:21:19.311091  222878 pod_ready.go:86] duration metric: took 399.404368ms for pod "kube-scheduler-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:19.311105  222878 pod_ready.go:40] duration metric: took 2.408258126s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:21:19.384536  222878 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 09:21:19.387928  222878 out.go:179] * Done! kubectl is now configured to use "no-preload-230403" cluster and "default" namespace by default
	I1129 09:21:20.420240  228280 node_ready.go:49] node "old-k8s-version-071895" is "Ready"
	I1129 09:21:20.420273  228280 node_ready.go:38] duration metric: took 5.496885787s for node "old-k8s-version-071895" to be "Ready" ...
	I1129 09:21:20.420288  228280 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:21:20.420349  228280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:21:23.273825  228280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.964648021s)
	I1129 09:21:23.273903  228280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.906700567s)
	I1129 09:21:23.311554  228280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.840569332s)
	I1129 09:21:23.311587  228280 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-071895"
	I1129 09:21:23.832919  228280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.521287713s)
	I1129 09:21:23.833085  228280 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.412717729s)
	I1129 09:21:23.833108  228280 api_server.go:72] duration metric: took 9.4502133s to wait for apiserver process to appear ...
	I1129 09:21:23.833115  228280 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:21:23.833137  228280 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:21:23.835874  228280 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-071895 addons enable metrics-server
	
	I1129 09:21:23.838864  228280 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1129 09:21:23.841692  228280 addons.go:530] duration metric: took 9.45840334s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1129 09:21:23.849959  228280 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:21:23.851407  228280 api_server.go:141] control plane version: v1.28.0
	I1129 09:21:23.851431  228280 api_server.go:131] duration metric: took 18.306187ms to wait for apiserver health ...
	I1129 09:21:23.851440  228280 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:21:23.859613  228280 system_pods.go:59] 9 kube-system pods found
	I1129 09:21:23.859712  228280 system_pods.go:61] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:23.859736  228280 system_pods.go:61] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:21:23.859786  228280 system_pods.go:61] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:21:23.859814  228280 system_pods.go:61] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:21:23.859854  228280 system_pods.go:61] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:21:23.859878  228280 system_pods.go:61] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:21:23.859903  228280 system_pods.go:61] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:21:23.859941  228280 system_pods.go:61] "metrics-server-57f55c9bc5-mfbx8" [a63508cb-d063-4356-aada-0caa5d3c29f4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:21:23.859965  228280 system_pods.go:61] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Running
	I1129 09:21:23.859987  228280 system_pods.go:74] duration metric: took 8.540162ms to wait for pod list to return data ...
	I1129 09:21:23.860029  228280 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:21:23.863876  228280 default_sa.go:45] found service account: "default"
	I1129 09:21:23.863946  228280 default_sa.go:55] duration metric: took 3.89243ms for default service account to be created ...
	I1129 09:21:23.863970  228280 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:21:23.867941  228280 system_pods.go:86] 9 kube-system pods found
	I1129 09:21:23.868025  228280 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:23.868051  228280 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:21:23.868091  228280 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:21:23.868124  228280 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:21:23.868148  228280 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:21:23.868183  228280 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:21:23.868210  228280 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:21:23.868237  228280 system_pods.go:89] "metrics-server-57f55c9bc5-mfbx8" [a63508cb-d063-4356-aada-0caa5d3c29f4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:21:23.868276  228280 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Running
	I1129 09:21:23.868305  228280 system_pods.go:126] duration metric: took 4.315341ms to wait for k8s-apps to be running ...
	I1129 09:21:23.868330  228280 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:21:23.868419  228280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:21:23.897942  228280 system_svc.go:56] duration metric: took 29.605398ms WaitForService to wait for kubelet
	I1129 09:21:23.898018  228280 kubeadm.go:587] duration metric: took 9.515121994s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:21:23.898055  228280 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:21:23.901199  228280 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:21:23.901293  228280 node_conditions.go:123] node cpu capacity is 2
	I1129 09:21:23.901336  228280 node_conditions.go:105] duration metric: took 3.260828ms to run NodePressure ...
	I1129 09:21:23.901381  228280 start.go:242] waiting for startup goroutines ...
	I1129 09:21:23.901407  228280 start.go:247] waiting for cluster config update ...
	I1129 09:21:23.901438  228280 start.go:256] writing updated cluster config ...
	I1129 09:21:23.901819  228280 ssh_runner.go:195] Run: rm -f paused
	I1129 09:21:23.906702  228280 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:21:23.911977  228280 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-htmzr" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:21:25.918419  228280 pod_ready.go:104] pod "coredns-5dd5756b68-htmzr" is not "Ready", error: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	08c9ce666df15       1611cd07b61d5       7 seconds ago       Running             busybox                   0                   452518fcef024       busybox                                     default
	05a4b0a308f52       138784d87c9c5       13 seconds ago      Running             coredns                   0                   9bc59d30b25c6       coredns-66bc5c9577-6sxgs                    kube-system
	f4a2aa0118a93       66749159455b3       13 seconds ago      Running             storage-provisioner       0                   4fc3bcb6b693a       storage-provisioner                         kube-system
	3992d8d87604b       b1a8c6f707935       24 seconds ago      Running             kindnet-cni               0                   5e599f3246de3       kindnet-9vm4c                               kube-system
	15db02d9b0c38       05baa95f5142d       26 seconds ago      Running             kube-proxy                0                   170ba5c7b589b       kube-proxy-dk26g                            kube-system
	529015092ff84       a1894772a478e       43 seconds ago      Running             etcd                      0                   f344e0133b6e2       etcd-no-preload-230403                      kube-system
	442ee1b81cec3       43911e833d64d       43 seconds ago      Running             kube-apiserver            0                   10242e0068737       kube-apiserver-no-preload-230403            kube-system
	ccc47fd0affc1       b5f57ec6b9867       43 seconds ago      Running             kube-scheduler            0                   f8601b21e1d7e       kube-scheduler-no-preload-230403            kube-system
	31108b7632b14       7eb2c6ff0c5a7       43 seconds ago      Running             kube-controller-manager   0                   b6fb24324860b       kube-controller-manager-no-preload-230403   kube-system
	
	
	==> containerd <==
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.220052767Z" level=info msg="CreateContainer within sandbox \"4fc3bcb6b693a563066d949dd7c4dcd71c3142a1d5504e91c36acb42981db35c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f4a2aa0118a93c01aee62976d9a1b1d28e43e4822b829e507a3bf3a58b4f5243\""
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.224516868Z" level=info msg="StartContainer for \"f4a2aa0118a93c01aee62976d9a1b1d28e43e4822b829e507a3bf3a58b4f5243\""
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.227535808Z" level=info msg="connecting to shim f4a2aa0118a93c01aee62976d9a1b1d28e43e4822b829e507a3bf3a58b4f5243" address="unix:///run/containerd/s/eafa56d42b15a5cf94a38b30c23a63af09ffac47ce47a6a134da11cc55fb5bd2" protocol=ttrpc version=3
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.236891368Z" level=info msg="Container 05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.270027037Z" level=info msg="CreateContainer within sandbox \"9bc59d30b25c6193275626374070a2fc66d9e237103ce720a16bb2be86d337a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43\""
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.276320556Z" level=info msg="StartContainer for \"05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43\""
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.277331048Z" level=info msg="connecting to shim 05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43" address="unix:///run/containerd/s/4150c600906c8fc2ff8fd225298ba8f9b7d7a162a4870c27a853e5d03d0ed27a" protocol=ttrpc version=3
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.411382064Z" level=info msg="StartContainer for \"f4a2aa0118a93c01aee62976d9a1b1d28e43e4822b829e507a3bf3a58b4f5243\" returns successfully"
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.474267681Z" level=info msg="StartContainer for \"05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43\" returns successfully"
	Nov 29 09:21:20 no-preload-230403 containerd[757]: time="2025-11-29T09:21:20.020427571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:687a18aa-1034-4892-9b86-c0ee20e62df3,Namespace:default,Attempt:0,}"
	Nov 29 09:21:20 no-preload-230403 containerd[757]: time="2025-11-29T09:21:20.098047678Z" level=info msg="connecting to shim 452518fcef02414ebd4d2194884e69aed409ab705db76708019b862cd80f24a4" address="unix:///run/containerd/s/c4310d71ae357f5f49d830cc9014c370440bbf503c78a07ec64e6309b208eb9f" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:21:20 no-preload-230403 containerd[757]: time="2025-11-29T09:21:20.226571275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:687a18aa-1034-4892-9b86-c0ee20e62df3,Namespace:default,Attempt:0,} returns sandbox id \"452518fcef02414ebd4d2194884e69aed409ab705db76708019b862cd80f24a4\""
	Nov 29 09:21:20 no-preload-230403 containerd[757]: time="2025-11-29T09:21:20.236151969Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.370559585Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.372666224Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.375511525Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.378699598Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.379679936Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.14330372s"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.379807387Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.390961047Z" level=info msg="CreateContainer within sandbox \"452518fcef02414ebd4d2194884e69aed409ab705db76708019b862cd80f24a4\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.416856811Z" level=info msg="Container 08c9ce666df156d70c72e2b14ec76fdf287b74c34f9ff9b0d13e6b44906d13a8: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.428490260Z" level=info msg="CreateContainer within sandbox \"452518fcef02414ebd4d2194884e69aed409ab705db76708019b862cd80f24a4\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"08c9ce666df156d70c72e2b14ec76fdf287b74c34f9ff9b0d13e6b44906d13a8\""
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.430431441Z" level=info msg="StartContainer for \"08c9ce666df156d70c72e2b14ec76fdf287b74c34f9ff9b0d13e6b44906d13a8\""
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.432041264Z" level=info msg="connecting to shim 08c9ce666df156d70c72e2b14ec76fdf287b74c34f9ff9b0d13e6b44906d13a8" address="unix:///run/containerd/s/c4310d71ae357f5f49d830cc9014c370440bbf503c78a07ec64e6309b208eb9f" protocol=ttrpc version=3
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.551057718Z" level=info msg="StartContainer for \"08c9ce666df156d70c72e2b14ec76fdf287b74c34f9ff9b0d13e6b44906d13a8\" returns successfully"
	
	
	==> coredns [05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39228 - 31855 "HINFO IN 688778656227592227.3077470296714571997. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006813432s
	
	
	==> describe nodes <==
	Name:               no-preload-230403
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-230403
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=no-preload-230403
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_20_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:20:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-230403
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:21:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:21:28 +0000   Sat, 29 Nov 2025 09:20:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:21:28 +0000   Sat, 29 Nov 2025 09:20:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:21:28 +0000   Sat, 29 Nov 2025 09:20:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:21:28 +0000   Sat, 29 Nov 2025 09:21:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-230403
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                bf89642e-03f0-40bb-a2b9-6ab8c2e41ff2
	  Boot ID:                    6647f078-4edd-40c5-9d0e-49eb5ed00bd7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-6sxgs                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-230403                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-9vm4c                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-230403             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-230403    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-dk26g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-230403             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Warning  CgroupV1                 45s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-230403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-230403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node no-preload-230403 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  33s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node no-preload-230403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node no-preload-230403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node no-preload-230403 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node no-preload-230403 event: Registered Node no-preload-230403 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-230403 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014634] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.570975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032231] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767655] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.282538] kauditd_printk_skb: 36 callbacks suppressed
	[Nov29 08:39] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001077] FS-Cache: O-cookie d=00000000b08097f7{9P.session} n=00000000a17ba85f
	[  +0.001074] FS-Cache: O-key=[10] '34323935323231393134'
	[  +0.000776] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=00000000b08097f7{9P.session} n=00000000534469ad
	[  +0.001092] FS-Cache: N-key=[10] '34323935323231393134'
	[Nov29 09:19] hrtimer: interrupt took 12545193 ns
	
	
	==> etcd [529015092ff84e3cfc1541604b0727773d20e63ff6780bb1fe9bd43be34d1e64] <==
	{"level":"warn","ts":"2025-11-29T09:20:50.547814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.577884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.621143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.659227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.725150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.729920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.762120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.780569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.810851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.851416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.869633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.926146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.993272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.038405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.143824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.180587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.204757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.227046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.269999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.324222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.380218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.429364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.461214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.716904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33284","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T09:20:55.227481Z","caller":"traceutil/trace.go:172","msg":"trace[583648155] transaction","detail":"{read_only:false; response_revision:135; number_of_response:1; }","duration":"101.34ms","start":"2025-11-29T09:20:55.126123Z","end":"2025-11-29T09:20:55.227463Z","steps":["trace[583648155] 'process raft request'  (duration: 60.728921ms)","trace[583648155] 'compare'  (duration: 40.244176ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:21:30 up  1:04,  0 user,  load average: 4.53, 3.08, 2.75
	Linux no-preload-230403 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3992d8d87604b5cf88e1aa999f8c6313f0b01d8c4b61b71819a8390069e32b57] <==
	I1129 09:21:05.183956       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:21:05.184234       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 09:21:05.184414       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:21:05.184467       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:21:05.184513       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:21:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:21:05.478264       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:21:05.478296       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:21:05.478307       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:21:05.481366       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:21:05.678673       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:21:05.678703       1 metrics.go:72] Registering metrics
	I1129 09:21:05.678884       1 controller.go:711] "Syncing nftables rules"
	I1129 09:21:15.484696       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:21:15.484747       1 main.go:301] handling current node
	I1129 09:21:25.477875       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:21:25.478099       1 main.go:301] handling current node
	
	
	==> kube-apiserver [442ee1b81cec38adacfe7c257d16cbad914c5e2dcd38dbbbdc3ab578c74701d7] <==
	I1129 09:20:53.825260       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:20:53.827155       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:20:53.858026       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:20:53.858399       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:20:53.870029       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:20:53.889367       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:20:54.104970       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:20:54.239097       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:20:54.304759       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:20:54.310379       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:20:55.978063       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:20:56.040769       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:20:56.155159       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:20:56.163141       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1129 09:20:56.164582       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:20:56.169995       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:20:56.427962       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:20:57.342740       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:20:57.358859       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:20:57.373259       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:21:02.288215       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:21:02.293681       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:21:02.389633       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:21:02.532424       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1129 09:21:28.832367       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:40778: use of closed network connection
	
	
	==> kube-controller-manager [31108b7632b14c27139254a050f0205a4419db19b5cd47bfa0792d9bca6594b2] <==
	I1129 09:21:01.438970       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:21:01.438978       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:21:01.439299       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:21:01.439435       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-230403"
	I1129 09:21:01.439540       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 09:21:01.444191       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-230403" podCIDRs=["10.244.0.0/24"]
	I1129 09:21:01.445105       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:21:01.452345       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:21:01.459847       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:21:01.468230       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:21:01.475201       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:21:01.476399       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 09:21:01.476507       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:21:01.477693       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:21:01.478269       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:21:01.478336       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:21:01.480927       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:21:01.480878       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:21:01.481307       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:21:01.481412       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 09:21:01.481459       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:21:01.481479       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:21:01.481495       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 09:21:01.483667       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 09:21:16.443958       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [15db02d9b0c388d85652c6f0d2f65bdd40af2dd1368cfc1a53059f0524c5dca3] <==
	I1129 09:21:03.396901       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:21:03.501189       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:21:03.602316       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:21:03.602352       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 09:21:03.602476       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:21:03.622161       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:21:03.622217       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:21:03.626421       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:21:03.626917       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:21:03.626943       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:21:03.628555       1 config.go:200] "Starting service config controller"
	I1129 09:21:03.628578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:21:03.628595       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:21:03.628599       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:21:03.628611       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:21:03.629269       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:21:03.632948       1 config.go:309] "Starting node config controller"
	I1129 09:21:03.632971       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:21:03.632979       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:21:03.729333       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:21:03.729422       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:21:03.729349       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ccc47fd0affc1ca4c6b1acdabcf649af9c7dc90d27aeae3fad9532c90b0ad1c6] <==
	I1129 09:20:55.217385       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:20:55.223934       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:20:55.223996       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1129 09:20:55.226902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1129 09:20:55.227330       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:20:55.227541       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 09:20:55.278924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:20:55.279083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:20:55.279140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:20:55.279192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:20:55.279237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:20:55.279284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:20:55.279342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:20:55.287173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:20:55.287249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:20:55.287300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:20:55.287349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:20:55.287539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:20:55.287593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:20:55.287633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:20:55.287770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:20:55.287817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:20:55.292211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:20:55.292354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1129 09:20:56.224177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:20:58 no-preload-230403 kubelet[2096]: E1129 09:20:58.458508    2096 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-230403\" already exists" pod="kube-system/kube-scheduler-no-preload-230403"
	Nov 29 09:20:58 no-preload-230403 kubelet[2096]: E1129 09:20:58.465251    2096 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-no-preload-230403\" already exists" pod="kube-system/kube-controller-manager-no-preload-230403"
	Nov 29 09:20:58 no-preload-230403 kubelet[2096]: E1129 09:20:58.469878    2096 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-no-preload-230403\" already exists" pod="kube-system/etcd-no-preload-230403"
	Nov 29 09:20:58 no-preload-230403 kubelet[2096]: E1129 09:20:58.471794    2096 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-230403\" already exists" pod="kube-system/kube-apiserver-no-preload-230403"
	Nov 29 09:21:01 no-preload-230403 kubelet[2096]: I1129 09:21:01.452026    2096 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:21:01 no-preload-230403 kubelet[2096]: I1129 09:21:01.453570    2096 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652808    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aa125e0-c584-41e4-8b34-60b0e868cd6a-lib-modules\") pod \"kindnet-9vm4c\" (UID: \"1aa125e0-c584-41e4-8b34-60b0e868cd6a\") " pod="kube-system/kindnet-9vm4c"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652854    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzxnr\" (UniqueName: \"kubernetes.io/projected/1aa125e0-c584-41e4-8b34-60b0e868cd6a-kube-api-access-jzxnr\") pod \"kindnet-9vm4c\" (UID: \"1aa125e0-c584-41e4-8b34-60b0e868cd6a\") " pod="kube-system/kindnet-9vm4c"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652880    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49e4de55-0854-4676-bee9-e107a3b5fae6-xtables-lock\") pod \"kube-proxy-dk26g\" (UID: \"49e4de55-0854-4676-bee9-e107a3b5fae6\") " pod="kube-system/kube-proxy-dk26g"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652898    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1aa125e0-c584-41e4-8b34-60b0e868cd6a-cni-cfg\") pod \"kindnet-9vm4c\" (UID: \"1aa125e0-c584-41e4-8b34-60b0e868cd6a\") " pod="kube-system/kindnet-9vm4c"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652922    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49e4de55-0854-4676-bee9-e107a3b5fae6-lib-modules\") pod \"kube-proxy-dk26g\" (UID: \"49e4de55-0854-4676-bee9-e107a3b5fae6\") " pod="kube-system/kube-proxy-dk26g"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652939    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv9pn\" (UniqueName: \"kubernetes.io/projected/49e4de55-0854-4676-bee9-e107a3b5fae6-kube-api-access-dv9pn\") pod \"kube-proxy-dk26g\" (UID: \"49e4de55-0854-4676-bee9-e107a3b5fae6\") " pod="kube-system/kube-proxy-dk26g"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652958    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/49e4de55-0854-4676-bee9-e107a3b5fae6-kube-proxy\") pod \"kube-proxy-dk26g\" (UID: \"49e4de55-0854-4676-bee9-e107a3b5fae6\") " pod="kube-system/kube-proxy-dk26g"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652976    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aa125e0-c584-41e4-8b34-60b0e868cd6a-xtables-lock\") pod \"kindnet-9vm4c\" (UID: \"1aa125e0-c584-41e4-8b34-60b0e868cd6a\") " pod="kube-system/kindnet-9vm4c"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.781508    2096 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 09:21:05 no-preload-230403 kubelet[2096]: I1129 09:21:05.482139    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9vm4c" podStartSLOduration=1.831790549 podStartE2EDuration="3.482113477s" podCreationTimestamp="2025-11-29 09:21:02 +0000 UTC" firstStartedPulling="2025-11-29 09:21:03.261360009 +0000 UTC m=+6.095872775" lastFinishedPulling="2025-11-29 09:21:04.911682938 +0000 UTC m=+7.746195703" observedRunningTime="2025-11-29 09:21:05.481939207 +0000 UTC m=+8.316451989" watchObservedRunningTime="2025-11-29 09:21:05.482113477 +0000 UTC m=+8.316626243"
	Nov 29 09:21:05 no-preload-230403 kubelet[2096]: I1129 09:21:05.482271    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dk26g" podStartSLOduration=3.482264272 podStartE2EDuration="3.482264272s" podCreationTimestamp="2025-11-29 09:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:21:03.495630755 +0000 UTC m=+6.330143529" watchObservedRunningTime="2025-11-29 09:21:05.482264272 +0000 UTC m=+8.316777046"
	Nov 29 09:21:15 no-preload-230403 kubelet[2096]: I1129 09:21:15.509988    2096 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:21:15 no-preload-230403 kubelet[2096]: I1129 09:21:15.669085    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt9mt\" (UniqueName: \"kubernetes.io/projected/bcd1577c-a3b1-415a-b6fe-ddc56dd52128-kube-api-access-pt9mt\") pod \"storage-provisioner\" (UID: \"bcd1577c-a3b1-415a-b6fe-ddc56dd52128\") " pod="kube-system/storage-provisioner"
	Nov 29 09:21:15 no-preload-230403 kubelet[2096]: I1129 09:21:15.669284    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8966af76-b077-4486-af59-aced26be0a08-config-volume\") pod \"coredns-66bc5c9577-6sxgs\" (UID: \"8966af76-b077-4486-af59-aced26be0a08\") " pod="kube-system/coredns-66bc5c9577-6sxgs"
	Nov 29 09:21:15 no-preload-230403 kubelet[2096]: I1129 09:21:15.669386    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmqb2\" (UniqueName: \"kubernetes.io/projected/8966af76-b077-4486-af59-aced26be0a08-kube-api-access-zmqb2\") pod \"coredns-66bc5c9577-6sxgs\" (UID: \"8966af76-b077-4486-af59-aced26be0a08\") " pod="kube-system/coredns-66bc5c9577-6sxgs"
	Nov 29 09:21:15 no-preload-230403 kubelet[2096]: I1129 09:21:15.669490    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bcd1577c-a3b1-415a-b6fe-ddc56dd52128-tmp\") pod \"storage-provisioner\" (UID: \"bcd1577c-a3b1-415a-b6fe-ddc56dd52128\") " pod="kube-system/storage-provisioner"
	Nov 29 09:21:16 no-preload-230403 kubelet[2096]: I1129 09:21:16.622562    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.622528802 podStartE2EDuration="13.622528802s" podCreationTimestamp="2025-11-29 09:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:21:16.622057931 +0000 UTC m=+19.456570697" watchObservedRunningTime="2025-11-29 09:21:16.622528802 +0000 UTC m=+19.457041576"
	Nov 29 09:21:16 no-preload-230403 kubelet[2096]: I1129 09:21:16.623150    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6sxgs" podStartSLOduration=14.623136198 podStartE2EDuration="14.623136198s" podCreationTimestamp="2025-11-29 09:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:21:16.537569737 +0000 UTC m=+19.372082502" watchObservedRunningTime="2025-11-29 09:21:16.623136198 +0000 UTC m=+19.457649038"
	Nov 29 09:21:19 no-preload-230403 kubelet[2096]: I1129 09:21:19.804471    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqjs\" (UniqueName: \"kubernetes.io/projected/687a18aa-1034-4892-9b86-c0ee20e62df3-kube-api-access-jhqjs\") pod \"busybox\" (UID: \"687a18aa-1034-4892-9b86-c0ee20e62df3\") " pod="default/busybox"
	
	
	==> storage-provisioner [f4a2aa0118a93c01aee62976d9a1b1d28e43e4822b829e507a3bf3a58b4f5243] <==
	I1129 09:21:16.398740       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:21:16.441176       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:21:16.441252       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:21:16.455780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:16.503804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:21:16.537076       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:21:16.538445       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-230403_781b0ec1-ce49-4e99-88af-65c5e6b31216!
	I1129 09:21:16.545246       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"07e55f1f-8797-4ed2-bfa1-48e52251e527", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-230403_781b0ec1-ce49-4e99-88af-65c5e6b31216 became leader
	W1129 09:21:16.590865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:16.610733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:21:16.640272       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-230403_781b0ec1-ce49-4e99-88af-65c5e6b31216!
	W1129 09:21:18.613951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:18.622582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:20.625931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:20.637148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:22.641284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:22.649517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:24.652897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:24.660094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:26.664208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:26.669558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:28.672863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:28.680050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
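The repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner log above appear to come from its leader election, which still renews an Endpoints-based lock (the LeaderElection event above references Kind:"Endpoints"); they are informational. The replacement resource can be listed with a standard kubectl query (a sketch, not part of the test suite; only the context name is taken from this report):

    kubectl --context no-preload-230403 get endpointslices -n kube-system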
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-230403 -n no-preload-230403
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-230403 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
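To iterate on a failure like the one marked FAILED above outside CI, the subtest can be re-run by its full slash-separated name via go test's -run filter. A minimal sketch, assuming the minikube integration tests live under ./test/integration of the source tree; the package path and timeout are assumptions, not taken from this report:

    # Re-run only the failing DeployApp subtest; -run takes a '/'-separated regexp for subtests.
    go test ./test/integration -run 'TestStartStop/group/no-preload/serial/DeployApp' -timeout 60m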
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-230403
helpers_test.go:243: (dbg) docker inspect no-preload-230403:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49",
	        "Created": "2025-11-29T09:20:14.069614189Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 223201,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:20:14.140181318Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49/hostname",
	        "HostsPath": "/var/lib/docker/containers/c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49/hosts",
	        "LogPath": "/var/lib/docker/containers/c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49/c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49-json.log",
	        "Name": "/no-preload-230403",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-230403:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-230403",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c13fc280ad629d62f0c4c5fe661cdcc74414dcaec03e5fd7e2e8a0200fefcc49",
	                "LowerDir": "/var/lib/docker/overlay2/d501390884813a028d2dd42e0002041dd99bd31cc2c6dcd14de127ede5a3bb12-init/diff:/var/lib/docker/overlay2/fc2ab0019906b90b3f033fa414f560878b73f7ff0ebdf77a0b554a40813009d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d501390884813a028d2dd42e0002041dd99bd31cc2c6dcd14de127ede5a3bb12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d501390884813a028d2dd42e0002041dd99bd31cc2c6dcd14de127ede5a3bb12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d501390884813a028d2dd42e0002041dd99bd31cc2c6dcd14de127ede5a3bb12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-230403",
	                "Source": "/var/lib/docker/volumes/no-preload-230403/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-230403",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-230403",
	                "name.minikube.sigs.k8s.io": "no-preload-230403",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "309d99d893613dbe91c496273c4ce1e014b087a33d4a9bf499bf2626a1e7db7f",
	            "SandboxKey": "/var/run/docker/netns/309d99d89361",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-230403": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:dc:47:0a:b5:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8e57cbedabd635b132c659bc736afc22b097ad423534099d6707207de613f503",
	                    "EndpointID": "9e50ae8389e18b989029e477de261d3d9acc6e6b765380ed10af568403d10d8f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-230403",
	                        "c13fc280ad62"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
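Rather than reading the full inspect JSON above, a single field can be pulled out with docker's Go-template formatter; the same template shape is what minikube itself uses later in these logs to resolve the container's SSH port (22/tcp). A sketch only, using the container name from this report:

    # Print just the host port mapped to the API server (8443/tcp) of this container.
    docker container inspect no-preload-230403 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

Against the state captured above this would print 33061, the port the status and logs commands below reach the API server through.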
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-230403 -n no-preload-230403
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-230403 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-230403 logs -n 25: (1.190292872s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p cilium-420729 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo containerd config dump                                                                                                                                                                                                        │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p cilium-420729 sudo crio config                                                                                                                                                                                                                   │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ delete  │ -p cilium-420729                                                                                                                                                                                                                                    │ cilium-420729            │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p force-systemd-env-559836 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ ssh     │ force-systemd-env-559836 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ delete  │ -p force-systemd-env-559836                                                                                                                                                                                                                         │ force-systemd-env-559836 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p cert-expiration-592440 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ delete  │ -p running-upgrade-115889                                                                                                                                                                                                                           │ running-upgrade-115889   │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ start   │ -p cert-options-515442 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:19 UTC │
	│ ssh     │ cert-options-515442 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ ssh     │ -p cert-options-515442 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ delete  │ -p cert-options-515442                                                                                                                                                                                                                              │ cert-options-515442      │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p cert-expiration-592440 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ delete  │ -p cert-expiration-592440                                                                                                                                                                                                                           │ cert-expiration-592440   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-230403        │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-071895 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ stop    │ -p old-k8s-version-071895 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-071895 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895   │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:21:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:21:06.410495  228280 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:21:06.410695  228280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:21:06.410727  228280 out.go:374] Setting ErrFile to fd 2...
	I1129 09:21:06.410751  228280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:21:06.411163  228280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:21:06.411671  228280 out.go:368] Setting JSON to false
	I1129 09:21:06.412836  228280 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3817,"bootTime":1764404249,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 09:21:06.412977  228280 start.go:143] virtualization:  
	I1129 09:21:06.416001  228280 out.go:179] * [old-k8s-version-071895] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:21:06.418407  228280 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:21:06.418485  228280 notify.go:221] Checking for updates...
	I1129 09:21:06.424117  228280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:21:06.426958  228280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:21:06.429973  228280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 09:21:06.432924  228280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:21:06.435924  228280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:21:06.439187  228280 config.go:182] Loaded profile config "old-k8s-version-071895": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:21:06.442810  228280 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1129 09:21:06.445778  228280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:21:06.492835  228280 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:21:06.492955  228280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:21:06.558997  228280 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:21:06.548578131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:21:06.559108  228280 docker.go:319] overlay module found
	I1129 09:21:06.564075  228280 out.go:179] * Using the docker driver based on existing profile
	I1129 09:21:06.566986  228280 start.go:309] selected driver: docker
	I1129 09:21:06.567013  228280 start.go:927] validating driver "docker" against &{Name:old-k8s-version-071895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-071895 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountStr
ing: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:21:06.567121  228280 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:21:06.567824  228280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:21:06.633484  228280 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:21:06.623607659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:21:06.633839  228280 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:21:06.633873  228280 cni.go:84] Creating CNI manager for ""
	I1129 09:21:06.633928  228280 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:21:06.633967  228280 start.go:353] cluster config:
	{Name:old-k8s-version-071895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-071895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:21:06.637146  228280 out.go:179] * Starting "old-k8s-version-071895" primary control-plane node in "old-k8s-version-071895" cluster
	I1129 09:21:06.639888  228280 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:21:06.642862  228280 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:21:06.646320  228280 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:21:06.646396  228280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:21:06.646635  228280 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1129 09:21:06.646646  228280 cache.go:65] Caching tarball of preloaded images
	I1129 09:21:06.646715  228280 preload.go:238] Found /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1129 09:21:06.646723  228280 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1129 09:21:06.646949  228280 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/config.json ...
	I1129 09:21:06.667931  228280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:21:06.667954  228280 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:21:06.667975  228280 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:21:06.668005  228280 start.go:360] acquireMachinesLock for old-k8s-version-071895: {Name:mk9c1843aef8ee4917771c9dd83cfe5ed673c322 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:21:06.668078  228280 start.go:364] duration metric: took 45.26µs to acquireMachinesLock for "old-k8s-version-071895"
	I1129 09:21:06.668100  228280 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:21:06.668109  228280 fix.go:54] fixHost starting: 
	I1129 09:21:06.668364  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:06.688163  228280 fix.go:112] recreateIfNeeded on old-k8s-version-071895: state=Stopped err=<nil>
	W1129 09:21:06.688202  228280 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:21:03.583045  222878 addons.go:530] duration metric: took 1.509163341s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:21:03.676529  222878 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-230403" context rescaled to 1 replicas
	W1129 09:21:05.175503  222878 node_ready.go:57] node "no-preload-230403" has "Ready":"False" status (will retry)
	W1129 09:21:07.176694  222878 node_ready.go:57] node "no-preload-230403" has "Ready":"False" status (will retry)
	I1129 09:21:06.691493  228280 out.go:252] * Restarting existing docker container for "old-k8s-version-071895" ...
	I1129 09:21:06.691578  228280 cli_runner.go:164] Run: docker start old-k8s-version-071895
	I1129 09:21:06.972406  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:06.996938  228280 kic.go:430] container "old-k8s-version-071895" state is running.
	I1129 09:21:06.998107  228280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-071895
	I1129 09:21:07.025090  228280 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/config.json ...
	I1129 09:21:07.025366  228280 machine.go:94] provisionDockerMachine start ...
	I1129 09:21:07.025444  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:07.045842  228280 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:07.046357  228280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:21:07.046373  228280 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:21:07.048601  228280 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58034->127.0.0.1:33063: read: connection reset by peer
	I1129 09:21:10.208260  228280 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-071895
	
	I1129 09:21:10.208287  228280 ubuntu.go:182] provisioning hostname "old-k8s-version-071895"
	I1129 09:21:10.208352  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:10.228023  228280 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:10.228478  228280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:21:10.228497  228280 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-071895 && echo "old-k8s-version-071895" | sudo tee /etc/hostname
	I1129 09:21:10.394406  228280 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-071895
	
	I1129 09:21:10.394557  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:10.412852  228280 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:10.413166  228280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:21:10.413182  228280 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-071895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-071895/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-071895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:21:10.565253  228280 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:21:10.565284  228280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-2317/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-2317/.minikube}
	I1129 09:21:10.565343  228280 ubuntu.go:190] setting up certificates
	I1129 09:21:10.565353  228280 provision.go:84] configureAuth start
	I1129 09:21:10.565442  228280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-071895
	I1129 09:21:10.583494  228280 provision.go:143] copyHostCerts
	I1129 09:21:10.583579  228280 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem, removing ...
	I1129 09:21:10.583600  228280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem
	I1129 09:21:10.583679  228280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem (1082 bytes)
	I1129 09:21:10.583791  228280 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem, removing ...
	I1129 09:21:10.583803  228280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem
	I1129 09:21:10.583832  228280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem (1123 bytes)
	I1129 09:21:10.583901  228280 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem, removing ...
	I1129 09:21:10.583912  228280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem
	I1129 09:21:10.583939  228280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem (1679 bytes)
	I1129 09:21:10.584008  228280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-071895 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-071895]
	I1129 09:21:11.222882  228280 provision.go:177] copyRemoteCerts
	I1129 09:21:11.222982  228280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:21:11.223046  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:11.241252  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:11.348525  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:21:11.367778  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1129 09:21:11.386772  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:21:11.404498  228280 provision.go:87] duration metric: took 839.125516ms to configureAuth
	I1129 09:21:11.404524  228280 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:21:11.404819  228280 config.go:182] Loaded profile config "old-k8s-version-071895": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:21:11.404835  228280 machine.go:97] duration metric: took 4.379454619s to provisionDockerMachine
	I1129 09:21:11.404843  228280 start.go:293] postStartSetup for "old-k8s-version-071895" (driver="docker")
	I1129 09:21:11.404858  228280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:21:11.404914  228280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:21:11.404956  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	W1129 09:21:09.674698  222878 node_ready.go:57] node "no-preload-230403" has "Ready":"False" status (will retry)
	W1129 09:21:12.174772  222878 node_ready.go:57] node "no-preload-230403" has "Ready":"False" status (will retry)
	I1129 09:21:11.423195  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:11.529586  228280 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:21:11.533831  228280 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:21:11.533863  228280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:21:11.533875  228280 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/addons for local assets ...
	I1129 09:21:11.533935  228280 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/files for local assets ...
	I1129 09:21:11.534036  228280 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem -> 41372.pem in /etc/ssl/certs
	I1129 09:21:11.534145  228280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:21:11.542551  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:21:11.561895  228280 start.go:296] duration metric: took 157.036248ms for postStartSetup
	I1129 09:21:11.561988  228280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:21:11.562045  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:11.579633  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:11.681878  228280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
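(For context on what those two disk probes return: the first awk picks the Use% column from the second line of "df -h /var", the second picks the available space in whole gigabytes from "df -BG /var". A minimal sketch; the sample outputs are illustrative, not taken from this run:)

    # Percentage of /var already used (column 5 of the data row):
    df -h /var | awk 'NR==2{print $5}'     # e.g. "31%"
    # Free space on /var in whole GB (column 4 with -BG):
    df -BG /var | awk 'NR==2{print $4}'    # e.g. "137G"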
	I1129 09:21:11.687047  228280 fix.go:56] duration metric: took 5.018931281s for fixHost
	I1129 09:21:11.687073  228280 start.go:83] releasing machines lock for "old-k8s-version-071895", held for 5.018983342s
	I1129 09:21:11.687157  228280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-071895
	I1129 09:21:11.704241  228280 ssh_runner.go:195] Run: cat /version.json
	I1129 09:21:11.704297  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:11.704492  228280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:21:11.704562  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:11.727286  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:11.734245  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:11.835008  228280 ssh_runner.go:195] Run: systemctl --version
	I1129 09:21:11.926191  228280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:21:11.930862  228280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:21:11.930931  228280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:21:11.939076  228280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:21:11.939101  228280 start.go:496] detecting cgroup driver to use...
	I1129 09:21:11.939134  228280 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 09:21:11.939182  228280 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:21:11.957131  228280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:21:11.971869  228280 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:21:11.971945  228280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:21:11.987710  228280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:21:12.004024  228280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:21:12.126126  228280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:21:12.247513  228280 docker.go:234] disabling docker service ...
	I1129 09:21:12.247625  228280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:21:12.263457  228280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:21:12.277571  228280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:21:12.404057  228280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:21:12.517241  228280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:21:12.530926  228280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:21:12.546866  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1129 09:21:12.556657  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:21:12.566469  228280 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1129 09:21:12.566584  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1129 09:21:12.578618  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:21:12.588255  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:21:12.597871  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:21:12.607144  228280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:21:12.615376  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:21:12.625063  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:21:12.634874  228280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:21:12.644330  228280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:21:12.652107  228280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:21:12.660102  228280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:21:12.781142  228280 ssh_runner.go:195] Run: sudo systemctl restart containerd
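(The sed one-liners above switch containerd to the cgroupfs cgroup driver by forcing SystemdCgroup = false, pin the sandbox image to registry.k8s.io/pause:3.9, point the CNI conf_dir at /etc/cni/net.d and re-enable unprivileged ports, after which containerd is restarted. A hedged spot-check of the rewritten /etc/containerd/config.toml on the node:)

    # Confirm the fields the sed edits were supposed to rewrite:
    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    # expected values (indentation depends on the stock config):
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true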
	I1129 09:21:12.932754  228280 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:21:12.932865  228280 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:21:12.937243  228280 start.go:564] Will wait 60s for crictl version
	I1129 09:21:12.937348  228280 ssh_runner.go:195] Run: which crictl
	I1129 09:21:12.941539  228280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:21:12.973098  228280 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:21:12.973218  228280 ssh_runner.go:195] Run: containerd --version
	I1129 09:21:12.993509  228280 ssh_runner.go:195] Run: containerd --version
	I1129 09:21:13.024405  228280 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1129 09:21:13.027409  228280 cli_runner.go:164] Run: docker network inspect old-k8s-version-071895 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:21:13.043452  228280 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:21:13.047557  228280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
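(The one-liner above is the usual trick for editing /etc/hosts from an unprivileged shell: a plain "sudo echo ... >> /etc/hosts" would fail because the >> redirection is performed by the unprivileged shell, so the file is rebuilt in a temp path, with any stale host.minikube.internal line filtered out, and then copied back under sudo. A generic sketch of the same pattern; IP and NAME are illustrative placeholders:)

    IP=192.168.76.1
    NAME=host.minikube.internal
    # keep every line that is not a stale entry for NAME, then append the fresh one
    { grep -v "${NAME}\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/hosts.$$"
    # the privileged write happens via sudo cp, not via shell redirection
    sudo cp "/tmp/hosts.$$" /etc/hosts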
	I1129 09:21:13.057974  228280 kubeadm.go:884] updating cluster {Name:old-k8s-version-071895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-071895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:21:13.058101  228280 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:21:13.058163  228280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:21:13.087299  228280 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:21:13.087321  228280 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:21:13.087382  228280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:21:13.113687  228280 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:21:13.113712  228280 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:21:13.113722  228280 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1129 09:21:13.113838  228280 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-071895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-071895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:21:13.113913  228280 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:21:13.140272  228280 cni.go:84] Creating CNI manager for ""
	I1129 09:21:13.140301  228280 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:21:13.140326  228280 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:21:13.140354  228280 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-071895 NodeName:old-k8s-version-071895 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:21:13.140483  228280 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-071895"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:21:13.140557  228280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1129 09:21:13.149717  228280 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:21:13.149814  228280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:21:13.157672  228280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1129 09:21:13.171306  228280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:21:13.185388  228280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1129 09:21:13.199265  228280 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:21:13.203237  228280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:21:13.214101  228280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:21:13.332118  228280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:21:13.348175  228280 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895 for IP: 192.168.76.2
	I1129 09:21:13.348246  228280 certs.go:195] generating shared ca certs ...
	I1129 09:21:13.348293  228280 certs.go:227] acquiring lock for ca certs: {Name:mke655c14945a8520f2f9de36531df923afb2bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:21:13.348496  228280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key
	I1129 09:21:13.348592  228280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key
	I1129 09:21:13.348644  228280 certs.go:257] generating profile certs ...
	I1129 09:21:13.348787  228280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.key
	I1129 09:21:13.348907  228280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/apiserver.key.501f6453
	I1129 09:21:13.349002  228280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/proxy-client.key
	I1129 09:21:13.349188  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem (1338 bytes)
	W1129 09:21:13.349262  228280 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137_empty.pem, impossibly tiny 0 bytes
	I1129 09:21:13.349292  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:21:13.349362  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:21:13.349433  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:21:13.349480  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem (1679 bytes)
	I1129 09:21:13.349603  228280 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:21:13.350469  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:21:13.376420  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1129 09:21:13.397277  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:21:13.421443  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:21:13.445862  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1129 09:21:13.471142  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:21:13.496107  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:21:13.521328  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:21:13.553546  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem --> /usr/share/ca-certificates/4137.pem (1338 bytes)
	I1129 09:21:13.574753  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /usr/share/ca-certificates/41372.pem (1708 bytes)
	I1129 09:21:13.595698  228280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:21:13.617559  228280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:21:13.632871  228280 ssh_runner.go:195] Run: openssl version
	I1129 09:21:13.640135  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4137.pem && ln -fs /usr/share/ca-certificates/4137.pem /etc/ssl/certs/4137.pem"
	I1129 09:21:13.649149  228280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4137.pem
	I1129 09:21:13.653500  228280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/4137.pem
	I1129 09:21:13.653609  228280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4137.pem
	I1129 09:21:13.713432  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4137.pem /etc/ssl/certs/51391683.0"
	I1129 09:21:13.722822  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41372.pem && ln -fs /usr/share/ca-certificates/41372.pem /etc/ssl/certs/41372.pem"
	I1129 09:21:13.733400  228280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41372.pem
	I1129 09:21:13.737680  228280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/41372.pem
	I1129 09:21:13.737792  228280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41372.pem
	I1129 09:21:13.780489  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41372.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:21:13.789820  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:21:13.798135  228280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:21:13.802169  228280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:21:13.802240  228280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:21:13.850151  228280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
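(The openssl/ln pairs above are the standard OpenSSL CA-directory layout: "openssl x509 -hash -noout -in <cert>" prints the subject-name hash (51391683, 3ec20f2e and b5213941 in this run), and a symlink named "<hash>.0" in /etc/ssl/certs lets anything trusting that directory find the PEM. A hedged recap of the pattern for a single certificate, reusing a path from this log:)

    CERT=/usr/share/ca-certificates/minikubeCA.pem   # path taken from the log above
    # subject-name hash used for CApath-style lookups:
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    # expose the cert under the hash name so OpenSSL's directory lookup can find it:
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"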
	I1129 09:21:13.858569  228280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:21:13.862729  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:21:13.905877  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:21:13.960878  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:21:14.005329  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:21:14.095967  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:21:14.153295  228280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
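(Each of those -checkend 86400 calls is a yes/no expiry probe: openssl exits 0 if the certificate is still valid 86400 seconds, i.e. 24 hours, from now and 1 if it will have expired, which is what lets minikube decide whether the existing control-plane certs can be reused. For example:)

    # Exit status tells you whether the cert survives the next 24h:
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
         -checkend 86400; then
      echo "cert still valid for at least 24h"
    else
      echo "cert expires within 24h (or is already expired)"
    fi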
	I1129 09:21:14.222693  228280 kubeadm.go:401] StartCluster: {Name:old-k8s-version-071895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-071895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:21:14.222785  228280 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:21:14.222874  228280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:21:14.261358  228280 cri.go:89] found id: "ea08ec4514b5c17cbba723d8243367bf487a5f488d4baf7c51179fa441556160"
	I1129 09:21:14.261386  228280 cri.go:89] found id: "f8f1e6dc2605a052d9e0af268d75e52d11eef09c6da328c174daa4346e21359d"
	I1129 09:21:14.261401  228280 cri.go:89] found id: "359d9432ef4979d387512d5a2a5a3cd9fb7a0987f4a3540a23407b70f7faf163"
	I1129 09:21:14.261406  228280 cri.go:89] found id: "db1d77c6c85eaf5ebd7dc839fb54d40271ee80c34795b249a47534f35c064f1c"
	I1129 09:21:14.261409  228280 cri.go:89] found id: "000a8de26034dcdc6da38237d77f79fa914b3088e593f0bbd13e14b39b42bf00"
	I1129 09:21:14.261413  228280 cri.go:89] found id: "c6e9c9ab04ae16e634fbb9b4e1d16587356b43ecc4799412da2e56e79409870b"
	I1129 09:21:14.261416  228280 cri.go:89] found id: "41dff26eb8e679cc29a87f83f59d117073bdaeb9ac41cb8ac8ee1cb32c92511a"
	I1129 09:21:14.261448  228280 cri.go:89] found id: "d34a4ced6121deea5f0e58655a9a45e86fccdde412c9acf3d1e35ab330cd1b4b"
	I1129 09:21:14.261462  228280 cri.go:89] found id: "7c5e9c05d20b870a1e96cdb0bbf1479f013609a2bbcde73ff5f9b106d4a35049"
	I1129 09:21:14.261469  228280 cri.go:89] found id: ""
	I1129 09:21:14.261540  228280 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1129 09:21:14.301781  228280 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291","pid":916,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291/rootfs","created":"2025-11-29T09:21:14.203252585Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-071895_a96342591fc7bb3ae41b190b02d65234","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-071895","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a96342591fc7bb3ae41b190b02d65234"},"owner":"root"},{"ociVersion":"1.2.1","id":"b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6","pid":820,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6/rootfs","created":"2025-11-29T09:21:14.035519986Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-071895_ef3ada0e43d54ea2068060e8f13708f8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-071895","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ef3ada0e43d54ea2068060e8f13708f8"},"owner":"root"},{"ociVersion":"1.2.1","id":"fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b","pid":929,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b/rootfs","created":"2025-11-29T09:21:14.239088381Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-071895_8b6ba97137797f9d8d5bef81cd980a7a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-071895","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8b6ba97137797f9d8d5bef81cd980a7a"},"owner":"root"}]
	I1129 09:21:14.301935  228280 cri.go:126] list returned 3 containers
	I1129 09:21:14.301953  228280 cri.go:129] container: {ID:72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291 Status:created}
	I1129 09:21:14.301981  228280 cri.go:131] skipping 72fdcef0fcf825d1763428531fe5a76f7bf57f324a3d7e86deedb167f50c3291 - not in ps
	I1129 09:21:14.301995  228280 cri.go:129] container: {ID:b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6 Status:running}
	I1129 09:21:14.302002  228280 cri.go:131] skipping b5331b538c679583194dc7e0747d914383216f1d9db602d35755e363247944d6 - not in ps
	I1129 09:21:14.302007  228280 cri.go:129] container: {ID:fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b Status:created}
	I1129 09:21:14.302016  228280 cri.go:131] skipping fa4f421c6750997b68993e039d22e81efcd9c5b6eab09893bcdc6d6061bab49b - not in ps
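(The raw "runc list -f json" dump above is what cri.go parses to work out which containers are actually live before the restart; only the three pod sandboxes show up, none of them match the crictl container IDs, so all three are skipped. To get the same id/status view interactively, a small sketch, assuming jq is available on the node:)

    # Summarise runc's view of the k8s.io namespace as "<id> <status>" lines:
    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | "\(.id) \(.status)"'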
	I1129 09:21:14.302084  228280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:21:14.326916  228280 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:21:14.326935  228280 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:21:14.327012  228280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:21:14.356154  228280 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:21:14.356847  228280 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-071895" does not appear in /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:21:14.357148  228280 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-2317/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-071895" cluster setting kubeconfig missing "old-k8s-version-071895" context setting]
	I1129 09:21:14.357658  228280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:21:14.359252  228280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:21:14.381505  228280 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 09:21:14.381540  228280 kubeadm.go:602] duration metric: took 54.598987ms to restartPrimaryControlPlane
	I1129 09:21:14.381585  228280 kubeadm.go:403] duration metric: took 158.875232ms to StartCluster
	I1129 09:21:14.381605  228280 settings.go:142] acquiring lock: {Name:mk44917d1324740eeda65bf3aa312ad1561d3ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:21:14.381692  228280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:21:14.382612  228280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:21:14.382863  228280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:21:14.383259  228280 config.go:182] Loaded profile config "old-k8s-version-071895": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:21:14.383290  228280 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:21:14.383388  228280 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-071895"
	I1129 09:21:14.383405  228280 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-071895"
	W1129 09:21:14.383411  228280 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:21:14.383413  228280 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-071895"
	I1129 09:21:14.383434  228280 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:21:14.383434  228280 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-071895"
	I1129 09:21:14.383754  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:14.383866  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:14.384331  228280 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-071895"
	I1129 09:21:14.384357  228280 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-071895"
	W1129 09:21:14.384377  228280 addons.go:248] addon metrics-server should already be in state true
	I1129 09:21:14.384406  228280 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:21:14.384884  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:14.387184  228280 addons.go:70] Setting dashboard=true in profile "old-k8s-version-071895"
	I1129 09:21:14.387440  228280 addons.go:239] Setting addon dashboard=true in "old-k8s-version-071895"
	W1129 09:21:14.387461  228280 addons.go:248] addon dashboard should already be in state true
	I1129 09:21:14.387493  228280 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:21:14.387982  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:14.399445  228280 out.go:179] * Verifying Kubernetes components...
	I1129 09:21:14.405460  228280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:21:14.448731  228280 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-071895"
	W1129 09:21:14.448755  228280 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:21:14.448780  228280 host.go:66] Checking if "old-k8s-version-071895" exists ...
	I1129 09:21:14.449192  228280 cli_runner.go:164] Run: docker container inspect old-k8s-version-071895 --format={{.State.Status}}
	I1129 09:21:14.462410  228280 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:21:14.465662  228280 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:21:14.468736  228280 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1129 09:21:14.468957  228280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:21:14.468998  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:21:14.470223  228280 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:21:14.470302  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:14.470945  228280 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1129 09:21:14.470967  228280 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1129 09:21:14.471023  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:14.474914  228280 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:21:14.474939  228280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:21:14.475005  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:14.512791  228280 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:21:14.512815  228280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:21:14.512893  228280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-071895
	I1129 09:21:14.545902  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:14.564885  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:14.577463  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:14.584658  228280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/old-k8s-version-071895/id_rsa Username:docker}
	I1129 09:21:14.798852  228280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:21:14.923287  228280 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-071895" to be "Ready" ...
	I1129 09:21:15.123215  228280 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1129 09:21:15.123241  228280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1129 09:21:15.268321  228280 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1129 09:21:15.268349  228280 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1129 09:21:15.273659  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:21:15.273687  228280 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:21:15.309135  228280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:21:15.367180  228280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:21:15.371388  228280 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:21:15.371415  228280 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1129 09:21:15.414920  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:21:15.414946  228280 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:21:15.470944  228280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:21:15.561828  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:21:15.561856  228280 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:21:15.845360  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:21:15.845380  228280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:21:15.883333  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:21:15.883357  228280 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:21:15.912154  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:21:15.912177  228280 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:21:15.935219  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:21:15.935266  228280 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:21:15.970998  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:21:15.971021  228280 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:21:16.150201  228280 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:21:16.150226  228280 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:21:16.311576  228280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1129 09:21:14.174893  222878 node_ready.go:57] node "no-preload-230403" has "Ready":"False" status (will retry)
	I1129 09:21:15.674937  222878 node_ready.go:49] node "no-preload-230403" is "Ready"
	I1129 09:21:15.674965  222878 node_ready.go:38] duration metric: took 12.503411882s for node "no-preload-230403" to be "Ready" ...
	I1129 09:21:15.674979  222878 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:21:15.675039  222878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:21:15.692194  222878 api_server.go:72] duration metric: took 13.618713226s to wait for apiserver process to appear ...
	I1129 09:21:15.692218  222878 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:21:15.692237  222878 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:21:15.700495  222878 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 09:21:15.701610  222878 api_server.go:141] control plane version: v1.34.1
	I1129 09:21:15.701671  222878 api_server.go:131] duration metric: took 9.446138ms to wait for apiserver health ...
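(The healthz wait above is just an HTTPS GET against the apiserver endpoint. The same probe can usually be reproduced by hand from anywhere that can reach 192.168.85.2:8443; a sketch, where -k skips CA verification the way a quick manual check typically does and anonymous access to /healthz is assumed to be allowed, as it is on a default cluster:)

    # Expect a plain "ok" body and HTTP 200 once the apiserver is healthy:
    curl -ks https://192.168.85.2:8443/healthz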
	I1129 09:21:15.701703  222878 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:21:15.710405  222878 system_pods.go:59] 8 kube-system pods found
	I1129 09:21:15.710508  222878 system_pods.go:61] "coredns-66bc5c9577-6sxgs" [8966af76-b077-4486-af59-aced26be0a08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:15.710536  222878 system_pods.go:61] "etcd-no-preload-230403" [06b0c9a5-89aa-4112-b1c3-a7e9a015aebd] Running
	I1129 09:21:15.710564  222878 system_pods.go:61] "kindnet-9vm4c" [1aa125e0-c584-41e4-8b34-60b0e868cd6a] Running
	I1129 09:21:15.710597  222878 system_pods.go:61] "kube-apiserver-no-preload-230403" [7c846e37-1b9e-46c5-83de-1f89a235429f] Running
	I1129 09:21:15.710621  222878 system_pods.go:61] "kube-controller-manager-no-preload-230403" [575a09ff-7c65-41f4-a394-39fede64fc46] Running
	I1129 09:21:15.710640  222878 system_pods.go:61] "kube-proxy-dk26g" [49e4de55-0854-4676-bee9-e107a3b5fae6] Running
	I1129 09:21:15.710677  222878 system_pods.go:61] "kube-scheduler-no-preload-230403" [6ea14dce-1037-4a73-b15a-3a88d98ae0c1] Running
	I1129 09:21:15.710712  222878 system_pods.go:61] "storage-provisioner" [bcd1577c-a3b1-415a-b6fe-ddc56dd52128] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:21:15.710736  222878 system_pods.go:74] duration metric: took 9.013872ms to wait for pod list to return data ...
	I1129 09:21:15.710761  222878 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:21:15.714856  222878 default_sa.go:45] found service account: "default"
	I1129 09:21:15.714882  222878 default_sa.go:55] duration metric: took 4.101301ms for default service account to be created ...
	I1129 09:21:15.714893  222878 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:21:15.718966  222878 system_pods.go:86] 8 kube-system pods found
	I1129 09:21:15.719052  222878 system_pods.go:89] "coredns-66bc5c9577-6sxgs" [8966af76-b077-4486-af59-aced26be0a08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:15.719074  222878 system_pods.go:89] "etcd-no-preload-230403" [06b0c9a5-89aa-4112-b1c3-a7e9a015aebd] Running
	I1129 09:21:15.719111  222878 system_pods.go:89] "kindnet-9vm4c" [1aa125e0-c584-41e4-8b34-60b0e868cd6a] Running
	I1129 09:21:15.719136  222878 system_pods.go:89] "kube-apiserver-no-preload-230403" [7c846e37-1b9e-46c5-83de-1f89a235429f] Running
	I1129 09:21:15.719157  222878 system_pods.go:89] "kube-controller-manager-no-preload-230403" [575a09ff-7c65-41f4-a394-39fede64fc46] Running
	I1129 09:21:15.719179  222878 system_pods.go:89] "kube-proxy-dk26g" [49e4de55-0854-4676-bee9-e107a3b5fae6] Running
	I1129 09:21:15.719213  222878 system_pods.go:89] "kube-scheduler-no-preload-230403" [6ea14dce-1037-4a73-b15a-3a88d98ae0c1] Running
	I1129 09:21:15.719238  222878 system_pods.go:89] "storage-provisioner" [bcd1577c-a3b1-415a-b6fe-ddc56dd52128] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:21:15.719286  222878 retry.go:31] will retry after 300.87979ms: missing components: kube-dns
	I1129 09:21:16.024949  222878 system_pods.go:86] 8 kube-system pods found
	I1129 09:21:16.025038  222878 system_pods.go:89] "coredns-66bc5c9577-6sxgs" [8966af76-b077-4486-af59-aced26be0a08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:16.025062  222878 system_pods.go:89] "etcd-no-preload-230403" [06b0c9a5-89aa-4112-b1c3-a7e9a015aebd] Running
	I1129 09:21:16.025105  222878 system_pods.go:89] "kindnet-9vm4c" [1aa125e0-c584-41e4-8b34-60b0e868cd6a] Running
	I1129 09:21:16.025146  222878 system_pods.go:89] "kube-apiserver-no-preload-230403" [7c846e37-1b9e-46c5-83de-1f89a235429f] Running
	I1129 09:21:16.025175  222878 system_pods.go:89] "kube-controller-manager-no-preload-230403" [575a09ff-7c65-41f4-a394-39fede64fc46] Running
	I1129 09:21:16.025196  222878 system_pods.go:89] "kube-proxy-dk26g" [49e4de55-0854-4676-bee9-e107a3b5fae6] Running
	I1129 09:21:16.025226  222878 system_pods.go:89] "kube-scheduler-no-preload-230403" [6ea14dce-1037-4a73-b15a-3a88d98ae0c1] Running
	I1129 09:21:16.025255  222878 system_pods.go:89] "storage-provisioner" [bcd1577c-a3b1-415a-b6fe-ddc56dd52128] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:21:16.025288  222878 retry.go:31] will retry after 370.333858ms: missing components: kube-dns
	I1129 09:21:16.401237  222878 system_pods.go:86] 8 kube-system pods found
	I1129 09:21:16.401318  222878 system_pods.go:89] "coredns-66bc5c9577-6sxgs" [8966af76-b077-4486-af59-aced26be0a08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:16.401341  222878 system_pods.go:89] "etcd-no-preload-230403" [06b0c9a5-89aa-4112-b1c3-a7e9a015aebd] Running
	I1129 09:21:16.401382  222878 system_pods.go:89] "kindnet-9vm4c" [1aa125e0-c584-41e4-8b34-60b0e868cd6a] Running
	I1129 09:21:16.401409  222878 system_pods.go:89] "kube-apiserver-no-preload-230403" [7c846e37-1b9e-46c5-83de-1f89a235429f] Running
	I1129 09:21:16.401433  222878 system_pods.go:89] "kube-controller-manager-no-preload-230403" [575a09ff-7c65-41f4-a394-39fede64fc46] Running
	I1129 09:21:16.401452  222878 system_pods.go:89] "kube-proxy-dk26g" [49e4de55-0854-4676-bee9-e107a3b5fae6] Running
	I1129 09:21:16.401483  222878 system_pods.go:89] "kube-scheduler-no-preload-230403" [6ea14dce-1037-4a73-b15a-3a88d98ae0c1] Running
	I1129 09:21:16.401511  222878 system_pods.go:89] "storage-provisioner" [bcd1577c-a3b1-415a-b6fe-ddc56dd52128] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:21:16.401541  222878 retry.go:31] will retry after 454.806267ms: missing components: kube-dns
	I1129 09:21:16.860495  222878 system_pods.go:86] 8 kube-system pods found
	I1129 09:21:16.860582  222878 system_pods.go:89] "coredns-66bc5c9577-6sxgs" [8966af76-b077-4486-af59-aced26be0a08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:16.860606  222878 system_pods.go:89] "etcd-no-preload-230403" [06b0c9a5-89aa-4112-b1c3-a7e9a015aebd] Running
	I1129 09:21:16.860648  222878 system_pods.go:89] "kindnet-9vm4c" [1aa125e0-c584-41e4-8b34-60b0e868cd6a] Running
	I1129 09:21:16.860677  222878 system_pods.go:89] "kube-apiserver-no-preload-230403" [7c846e37-1b9e-46c5-83de-1f89a235429f] Running
	I1129 09:21:16.860702  222878 system_pods.go:89] "kube-controller-manager-no-preload-230403" [575a09ff-7c65-41f4-a394-39fede64fc46] Running
	I1129 09:21:16.860724  222878 system_pods.go:89] "kube-proxy-dk26g" [49e4de55-0854-4676-bee9-e107a3b5fae6] Running
	I1129 09:21:16.860758  222878 system_pods.go:89] "kube-scheduler-no-preload-230403" [6ea14dce-1037-4a73-b15a-3a88d98ae0c1] Running
	I1129 09:21:16.860784  222878 system_pods.go:89] "storage-provisioner" [bcd1577c-a3b1-415a-b6fe-ddc56dd52128] Running
	I1129 09:21:16.860809  222878 system_pods.go:126] duration metric: took 1.145909329s to wait for k8s-apps to be running ...
	I1129 09:21:16.860831  222878 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:21:16.860919  222878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:21:16.884526  222878 system_svc.go:56] duration metric: took 23.687382ms WaitForService to wait for kubelet
	I1129 09:21:16.884595  222878 kubeadm.go:587] duration metric: took 14.811118806s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:21:16.884701  222878 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:21:16.893725  222878 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:21:16.893808  222878 node_conditions.go:123] node cpu capacity is 2
	I1129 09:21:16.893837  222878 node_conditions.go:105] duration metric: took 9.111326ms to run NodePressure ...
	I1129 09:21:16.893880  222878 start.go:242] waiting for startup goroutines ...
	I1129 09:21:16.893906  222878 start.go:247] waiting for cluster config update ...
	I1129 09:21:16.893932  222878 start.go:256] writing updated cluster config ...
	I1129 09:21:16.894255  222878 ssh_runner.go:195] Run: rm -f paused
	I1129 09:21:16.902798  222878 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:21:16.906667  222878 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6sxgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.914278  222878 pod_ready.go:94] pod "coredns-66bc5c9577-6sxgs" is "Ready"
	I1129 09:21:17.914336  222878 pod_ready.go:86] duration metric: took 1.007600556s for pod "coredns-66bc5c9577-6sxgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.920245  222878 pod_ready.go:83] waiting for pod "etcd-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.929670  222878 pod_ready.go:94] pod "etcd-no-preload-230403" is "Ready"
	I1129 09:21:17.929693  222878 pod_ready.go:86] duration metric: took 9.427397ms for pod "etcd-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.932241  222878 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.941780  222878 pod_ready.go:94] pod "kube-apiserver-no-preload-230403" is "Ready"
	I1129 09:21:17.941862  222878 pod_ready.go:86] duration metric: took 9.600141ms for pod "kube-apiserver-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:17.944516  222878 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:18.111607  222878 pod_ready.go:94] pod "kube-controller-manager-no-preload-230403" is "Ready"
	I1129 09:21:18.111682  222878 pod_ready.go:86] duration metric: took 167.098904ms for pod "kube-controller-manager-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:18.310592  222878 pod_ready.go:83] waiting for pod "kube-proxy-dk26g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:18.710865  222878 pod_ready.go:94] pod "kube-proxy-dk26g" is "Ready"
	I1129 09:21:18.710888  222878 pod_ready.go:86] duration metric: took 400.2757ms for pod "kube-proxy-dk26g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:18.911663  222878 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:19.311067  222878 pod_ready.go:94] pod "kube-scheduler-no-preload-230403" is "Ready"
	I1129 09:21:19.311091  222878 pod_ready.go:86] duration metric: took 399.404368ms for pod "kube-scheduler-no-preload-230403" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:21:19.311105  222878 pod_ready.go:40] duration metric: took 2.408258126s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:21:19.384536  222878 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 09:21:19.387928  222878 out.go:179] * Done! kubectl is now configured to use "no-preload-230403" cluster and "default" namespace by default
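	The per-pod readiness wait logged above can be approximated by hand with kubectl; a minimal sketch, assuming the profile name doubles as the kubeconfig context (minikube's default):

		kubectl --context no-preload-230403 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m

	The same pattern applies to the other component labels listed in the log (component=etcd, component=kube-apiserver, and so on).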
	I1129 09:21:20.420240  228280 node_ready.go:49] node "old-k8s-version-071895" is "Ready"
	I1129 09:21:20.420273  228280 node_ready.go:38] duration metric: took 5.496885787s for node "old-k8s-version-071895" to be "Ready" ...
	I1129 09:21:20.420288  228280 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:21:20.420349  228280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:21:23.273825  228280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.964648021s)
	I1129 09:21:23.273903  228280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.906700567s)
	I1129 09:21:23.311554  228280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.840569332s)
	I1129 09:21:23.311587  228280 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-071895"
	I1129 09:21:23.832919  228280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.521287713s)
	I1129 09:21:23.833085  228280 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.412717729s)
	I1129 09:21:23.833108  228280 api_server.go:72] duration metric: took 9.4502133s to wait for apiserver process to appear ...
	I1129 09:21:23.833115  228280 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:21:23.833137  228280 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:21:23.835874  228280 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-071895 addons enable metrics-server
	
	I1129 09:21:23.838864  228280 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1129 09:21:23.841692  228280 addons.go:530] duration metric: took 9.45840334s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1129 09:21:23.849959  228280 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:21:23.851407  228280 api_server.go:141] control plane version: v1.28.0
	I1129 09:21:23.851431  228280 api_server.go:131] duration metric: took 18.306187ms to wait for apiserver health ...
	I1129 09:21:23.851440  228280 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:21:23.859613  228280 system_pods.go:59] 9 kube-system pods found
	I1129 09:21:23.859712  228280 system_pods.go:61] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:23.859736  228280 system_pods.go:61] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:21:23.859786  228280 system_pods.go:61] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:21:23.859814  228280 system_pods.go:61] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:21:23.859854  228280 system_pods.go:61] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:21:23.859878  228280 system_pods.go:61] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:21:23.859903  228280 system_pods.go:61] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:21:23.859941  228280 system_pods.go:61] "metrics-server-57f55c9bc5-mfbx8" [a63508cb-d063-4356-aada-0caa5d3c29f4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:21:23.859965  228280 system_pods.go:61] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Running
	I1129 09:21:23.859987  228280 system_pods.go:74] duration metric: took 8.540162ms to wait for pod list to return data ...
	I1129 09:21:23.860029  228280 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:21:23.863876  228280 default_sa.go:45] found service account: "default"
	I1129 09:21:23.863946  228280 default_sa.go:55] duration metric: took 3.89243ms for default service account to be created ...
	I1129 09:21:23.863970  228280 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:21:23.867941  228280 system_pods.go:86] 9 kube-system pods found
	I1129 09:21:23.868025  228280 system_pods.go:89] "coredns-5dd5756b68-htmzr" [c6b5f2ee-df4f-40a3-be2e-6f16e858e497] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:21:23.868051  228280 system_pods.go:89] "etcd-old-k8s-version-071895" [79f6e3b1-4d0e-480f-ba81-e9c28edc83ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:21:23.868091  228280 system_pods.go:89] "kindnet-58g5f" [d4743cee-0834-4a44-9cf7-d0228aa5b843] Running
	I1129 09:21:23.868124  228280 system_pods.go:89] "kube-apiserver-old-k8s-version-071895" [81748b80-7ec0-4a82-b646-673534a05137] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:21:23.868148  228280 system_pods.go:89] "kube-controller-manager-old-k8s-version-071895" [b6691622-dfbd-4b77-bedd-c7a97120a360] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:21:23.868183  228280 system_pods.go:89] "kube-proxy-4jxrn" [3e4bdb82-85e5-468b-80dc-0481c990f117] Running
	I1129 09:21:23.868210  228280 system_pods.go:89] "kube-scheduler-old-k8s-version-071895" [fe7f98e1-0743-41d8-869a-4807c081f621] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:21:23.868237  228280 system_pods.go:89] "metrics-server-57f55c9bc5-mfbx8" [a63508cb-d063-4356-aada-0caa5d3c29f4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:21:23.868276  228280 system_pods.go:89] "storage-provisioner" [784fe707-ae15-4eae-a70c-ec084ce3d812] Running
	I1129 09:21:23.868305  228280 system_pods.go:126] duration metric: took 4.315341ms to wait for k8s-apps to be running ...
	I1129 09:21:23.868330  228280 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:21:23.868419  228280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:21:23.897942  228280 system_svc.go:56] duration metric: took 29.605398ms WaitForService to wait for kubelet
	I1129 09:21:23.898018  228280 kubeadm.go:587] duration metric: took 9.515121994s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:21:23.898055  228280 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:21:23.901199  228280 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:21:23.901293  228280 node_conditions.go:123] node cpu capacity is 2
	I1129 09:21:23.901336  228280 node_conditions.go:105] duration metric: took 3.260828ms to run NodePressure ...
	I1129 09:21:23.901381  228280 start.go:242] waiting for startup goroutines ...
	I1129 09:21:23.901407  228280 start.go:247] waiting for cluster config update ...
	I1129 09:21:23.901438  228280 start.go:256] writing updated cluster config ...
	I1129 09:21:23.901819  228280 ssh_runner.go:195] Run: rm -f paused
	I1129 09:21:23.906702  228280 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:21:23.911977  228280 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-htmzr" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:21:25.918419  228280 pod_ready.go:104] pod "coredns-5dd5756b68-htmzr" is not "Ready", error: <nil>
	W1129 09:21:28.418544  228280 pod_ready.go:104] pod "coredns-5dd5756b68-htmzr" is not "Ready", error: <nil>
	W1129 09:21:30.419616  228280 pod_ready.go:104] pod "coredns-5dd5756b68-htmzr" is not "Ready", error: <nil>
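	For reference, the apiserver healthz probe recorded above (https://192.168.76.2:8443/healthz returning 200 "ok") can be repeated manually; a sketch assuming the endpoint is still reachable from the host (on a default setup the /healthz path is typically served to unauthenticated clients):

		curl -k https://192.168.76.2:8443/healthz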
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	08c9ce666df15       1611cd07b61d5       9 seconds ago       Running             busybox                   0                   452518fcef024       busybox                                     default
	05a4b0a308f52       138784d87c9c5       15 seconds ago      Running             coredns                   0                   9bc59d30b25c6       coredns-66bc5c9577-6sxgs                    kube-system
	f4a2aa0118a93       66749159455b3       15 seconds ago      Running             storage-provisioner       0                   4fc3bcb6b693a       storage-provisioner                         kube-system
	3992d8d87604b       b1a8c6f707935       27 seconds ago      Running             kindnet-cni               0                   5e599f3246de3       kindnet-9vm4c                               kube-system
	15db02d9b0c38       05baa95f5142d       29 seconds ago      Running             kube-proxy                0                   170ba5c7b589b       kube-proxy-dk26g                            kube-system
	529015092ff84       a1894772a478e       45 seconds ago      Running             etcd                      0                   f344e0133b6e2       etcd-no-preload-230403                      kube-system
	442ee1b81cec3       43911e833d64d       45 seconds ago      Running             kube-apiserver            0                   10242e0068737       kube-apiserver-no-preload-230403            kube-system
	ccc47fd0affc1       b5f57ec6b9867       46 seconds ago      Running             kube-scheduler            0                   f8601b21e1d7e       kube-scheduler-no-preload-230403            kube-system
	31108b7632b14       7eb2c6ff0c5a7       46 seconds ago      Running             kube-controller-manager   0                   b6fb24324860b       kube-controller-manager-no-preload-230403   kube-system
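	A comparable container listing can be pulled straight from the runtime on the node while the cluster is still up; a sketch assuming the no-preload profile is running:

		minikube -p no-preload-230403 ssh
		sudo crictl ps -a

	(the second command runs inside the ssh session on the node).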
	
	
	==> containerd <==
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.220052767Z" level=info msg="CreateContainer within sandbox \"4fc3bcb6b693a563066d949dd7c4dcd71c3142a1d5504e91c36acb42981db35c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f4a2aa0118a93c01aee62976d9a1b1d28e43e4822b829e507a3bf3a58b4f5243\""
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.224516868Z" level=info msg="StartContainer for \"f4a2aa0118a93c01aee62976d9a1b1d28e43e4822b829e507a3bf3a58b4f5243\""
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.227535808Z" level=info msg="connecting to shim f4a2aa0118a93c01aee62976d9a1b1d28e43e4822b829e507a3bf3a58b4f5243" address="unix:///run/containerd/s/eafa56d42b15a5cf94a38b30c23a63af09ffac47ce47a6a134da11cc55fb5bd2" protocol=ttrpc version=3
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.236891368Z" level=info msg="Container 05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.270027037Z" level=info msg="CreateContainer within sandbox \"9bc59d30b25c6193275626374070a2fc66d9e237103ce720a16bb2be86d337a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43\""
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.276320556Z" level=info msg="StartContainer for \"05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43\""
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.277331048Z" level=info msg="connecting to shim 05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43" address="unix:///run/containerd/s/4150c600906c8fc2ff8fd225298ba8f9b7d7a162a4870c27a853e5d03d0ed27a" protocol=ttrpc version=3
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.411382064Z" level=info msg="StartContainer for \"f4a2aa0118a93c01aee62976d9a1b1d28e43e4822b829e507a3bf3a58b4f5243\" returns successfully"
	Nov 29 09:21:16 no-preload-230403 containerd[757]: time="2025-11-29T09:21:16.474267681Z" level=info msg="StartContainer for \"05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43\" returns successfully"
	Nov 29 09:21:20 no-preload-230403 containerd[757]: time="2025-11-29T09:21:20.020427571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:687a18aa-1034-4892-9b86-c0ee20e62df3,Namespace:default,Attempt:0,}"
	Nov 29 09:21:20 no-preload-230403 containerd[757]: time="2025-11-29T09:21:20.098047678Z" level=info msg="connecting to shim 452518fcef02414ebd4d2194884e69aed409ab705db76708019b862cd80f24a4" address="unix:///run/containerd/s/c4310d71ae357f5f49d830cc9014c370440bbf503c78a07ec64e6309b208eb9f" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:21:20 no-preload-230403 containerd[757]: time="2025-11-29T09:21:20.226571275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:687a18aa-1034-4892-9b86-c0ee20e62df3,Namespace:default,Attempt:0,} returns sandbox id \"452518fcef02414ebd4d2194884e69aed409ab705db76708019b862cd80f24a4\""
	Nov 29 09:21:20 no-preload-230403 containerd[757]: time="2025-11-29T09:21:20.236151969Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.370559585Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.372666224Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.375511525Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.378699598Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.379679936Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.14330372s"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.379807387Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.390961047Z" level=info msg="CreateContainer within sandbox \"452518fcef02414ebd4d2194884e69aed409ab705db76708019b862cd80f24a4\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.416856811Z" level=info msg="Container 08c9ce666df156d70c72e2b14ec76fdf287b74c34f9ff9b0d13e6b44906d13a8: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.428490260Z" level=info msg="CreateContainer within sandbox \"452518fcef02414ebd4d2194884e69aed409ab705db76708019b862cd80f24a4\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"08c9ce666df156d70c72e2b14ec76fdf287b74c34f9ff9b0d13e6b44906d13a8\""
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.430431441Z" level=info msg="StartContainer for \"08c9ce666df156d70c72e2b14ec76fdf287b74c34f9ff9b0d13e6b44906d13a8\""
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.432041264Z" level=info msg="connecting to shim 08c9ce666df156d70c72e2b14ec76fdf287b74c34f9ff9b0d13e6b44906d13a8" address="unix:///run/containerd/s/c4310d71ae357f5f49d830cc9014c370440bbf503c78a07ec64e6309b208eb9f" protocol=ttrpc version=3
	Nov 29 09:21:22 no-preload-230403 containerd[757]: time="2025-11-29T09:21:22.551057718Z" level=info msg="StartContainer for \"08c9ce666df156d70c72e2b14ec76fdf287b74c34f9ff9b0d13e6b44906d13a8\" returns successfully"
	
	
	==> coredns [05a4b0a308f52d4430467a7c8a19d4c9f59139df7163eae781089f10995b9e43] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39228 - 31855 "HINFO IN 688778656227592227.3077470296714571997. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006813432s
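	The same CoreDNS output can also be fetched through the API server rather than the runtime; a sketch assuming the profile's context is still configured:

		kubectl --context no-preload-230403 -n kube-system logs -l k8s-app=kube-dns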
	
	
	==> describe nodes <==
	Name:               no-preload-230403
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-230403
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=no-preload-230403
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_20_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:20:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-230403
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:21:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:21:28 +0000   Sat, 29 Nov 2025 09:20:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:21:28 +0000   Sat, 29 Nov 2025 09:20:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:21:28 +0000   Sat, 29 Nov 2025 09:20:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:21:28 +0000   Sat, 29 Nov 2025 09:21:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-230403
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                bf89642e-03f0-40bb-a2b9-6ab8c2e41ff2
	  Boot ID:                    6647f078-4edd-40c5-9d0e-49eb5ed00bd7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-6sxgs                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-230403                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-9vm4c                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-230403             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-230403    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-dk26g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-230403             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Warning  CgroupV1                 47s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node no-preload-230403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node no-preload-230403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x7 over 47s)  kubelet          Node no-preload-230403 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-230403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-230403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-230403 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-230403 event: Registered Node no-preload-230403 in Controller
	  Normal   NodeReady                17s                kubelet          Node no-preload-230403 status is now: NodeReady
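	The node description above matches the format of kubectl's describe output; it can be regenerated against the same node with, for example:

		kubectl --context no-preload-230403 describe node no-preload-230403

	assuming the profile's kubeconfig context still exists on the host.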
	
	
	==> dmesg <==
	[Nov29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014634] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.570975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032231] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767655] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.282538] kauditd_printk_skb: 36 callbacks suppressed
	[Nov29 08:39] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001077] FS-Cache: O-cookie d=00000000b08097f7{9P.session} n=00000000a17ba85f
	[  +0.001074] FS-Cache: O-key=[10] '34323935323231393134'
	[  +0.000776] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=00000000b08097f7{9P.session} n=00000000534469ad
	[  +0.001092] FS-Cache: N-key=[10] '34323935323231393134'
	[Nov29 09:19] hrtimer: interrupt took 12545193 ns
	
	
	==> etcd [529015092ff84e3cfc1541604b0727773d20e63ff6780bb1fe9bd43be34d1e64] <==
	{"level":"warn","ts":"2025-11-29T09:20:50.547814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.577884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.621143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.659227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.725150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.729920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.762120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.780569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.810851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.851416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.869633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.926146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:50.993272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.038405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.143824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.180587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.204757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.227046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.269999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.324222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.380218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.429364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.461214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:20:51.716904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33284","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T09:20:55.227481Z","caller":"traceutil/trace.go:172","msg":"trace[583648155] transaction","detail":"{read_only:false; response_revision:135; number_of_response:1; }","duration":"101.34ms","start":"2025-11-29T09:20:55.126123Z","end":"2025-11-29T09:20:55.227463Z","steps":["trace[583648155] 'process raft request'  (duration: 60.728921ms)","trace[583648155] 'compare'  (duration: 40.244176ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:21:32 up  1:04,  0 user,  load average: 4.53, 3.08, 2.75
	Linux no-preload-230403 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3992d8d87604b5cf88e1aa999f8c6313f0b01d8c4b61b71819a8390069e32b57] <==
	I1129 09:21:05.183956       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:21:05.184234       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 09:21:05.184414       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:21:05.184467       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:21:05.184513       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:21:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:21:05.478264       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:21:05.478296       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:21:05.478307       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:21:05.481366       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:21:05.678673       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:21:05.678703       1 metrics.go:72] Registering metrics
	I1129 09:21:05.678884       1 controller.go:711] "Syncing nftables rules"
	I1129 09:21:15.484696       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:21:15.484747       1 main.go:301] handling current node
	I1129 09:21:25.477875       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:21:25.478099       1 main.go:301] handling current node
	
	
	==> kube-apiserver [442ee1b81cec38adacfe7c257d16cbad914c5e2dcd38dbbbdc3ab578c74701d7] <==
	I1129 09:20:53.825260       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:20:53.827155       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:20:53.858026       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:20:53.858399       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:20:53.870029       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:20:53.889367       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:20:54.104970       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:20:54.239097       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:20:54.304759       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:20:54.310379       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:20:55.978063       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:20:56.040769       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:20:56.155159       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:20:56.163141       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1129 09:20:56.164582       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:20:56.169995       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:20:56.427962       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:20:57.342740       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:20:57.358859       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:20:57.373259       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:21:02.288215       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:21:02.293681       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:21:02.389633       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:21:02.532424       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1129 09:21:28.832367       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:40778: use of closed network connection
	
	
	==> kube-controller-manager [31108b7632b14c27139254a050f0205a4419db19b5cd47bfa0792d9bca6594b2] <==
	I1129 09:21:01.438970       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:21:01.438978       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:21:01.439299       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:21:01.439435       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-230403"
	I1129 09:21:01.439540       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 09:21:01.444191       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-230403" podCIDRs=["10.244.0.0/24"]
	I1129 09:21:01.445105       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:21:01.452345       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:21:01.459847       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:21:01.468230       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:21:01.475201       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:21:01.476399       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 09:21:01.476507       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:21:01.477693       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:21:01.478269       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:21:01.478336       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:21:01.480927       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:21:01.480878       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:21:01.481307       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:21:01.481412       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 09:21:01.481459       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:21:01.481479       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:21:01.481495       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 09:21:01.483667       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 09:21:16.443958       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [15db02d9b0c388d85652c6f0d2f65bdd40af2dd1368cfc1a53059f0524c5dca3] <==
	I1129 09:21:03.396901       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:21:03.501189       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:21:03.602316       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:21:03.602352       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 09:21:03.602476       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:21:03.622161       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:21:03.622217       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:21:03.626421       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:21:03.626917       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:21:03.626943       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:21:03.628555       1 config.go:200] "Starting service config controller"
	I1129 09:21:03.628578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:21:03.628595       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:21:03.628599       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:21:03.628611       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:21:03.629269       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:21:03.632948       1 config.go:309] "Starting node config controller"
	I1129 09:21:03.632971       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:21:03.632979       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:21:03.729333       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:21:03.729422       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:21:03.729349       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ccc47fd0affc1ca4c6b1acdabcf649af9c7dc90d27aeae3fad9532c90b0ad1c6] <==
	I1129 09:20:55.217385       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:20:55.223934       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:20:55.223996       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1129 09:20:55.226902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1129 09:20:55.227330       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:20:55.227541       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 09:20:55.278924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:20:55.279083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:20:55.279140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:20:55.279192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:20:55.279237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:20:55.279284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:20:55.279342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:20:55.287173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:20:55.287249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:20:55.287300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:20:55.287349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:20:55.287539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:20:55.287593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:20:55.287633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:20:55.287770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:20:55.287817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:20:55.292211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:20:55.292354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1129 09:20:56.224177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:20:58 no-preload-230403 kubelet[2096]: E1129 09:20:58.458508    2096 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-230403\" already exists" pod="kube-system/kube-scheduler-no-preload-230403"
	Nov 29 09:20:58 no-preload-230403 kubelet[2096]: E1129 09:20:58.465251    2096 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-no-preload-230403\" already exists" pod="kube-system/kube-controller-manager-no-preload-230403"
	Nov 29 09:20:58 no-preload-230403 kubelet[2096]: E1129 09:20:58.469878    2096 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-no-preload-230403\" already exists" pod="kube-system/etcd-no-preload-230403"
	Nov 29 09:20:58 no-preload-230403 kubelet[2096]: E1129 09:20:58.471794    2096 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-230403\" already exists" pod="kube-system/kube-apiserver-no-preload-230403"
	Nov 29 09:21:01 no-preload-230403 kubelet[2096]: I1129 09:21:01.452026    2096 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:21:01 no-preload-230403 kubelet[2096]: I1129 09:21:01.453570    2096 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652808    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aa125e0-c584-41e4-8b34-60b0e868cd6a-lib-modules\") pod \"kindnet-9vm4c\" (UID: \"1aa125e0-c584-41e4-8b34-60b0e868cd6a\") " pod="kube-system/kindnet-9vm4c"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652854    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzxnr\" (UniqueName: \"kubernetes.io/projected/1aa125e0-c584-41e4-8b34-60b0e868cd6a-kube-api-access-jzxnr\") pod \"kindnet-9vm4c\" (UID: \"1aa125e0-c584-41e4-8b34-60b0e868cd6a\") " pod="kube-system/kindnet-9vm4c"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652880    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49e4de55-0854-4676-bee9-e107a3b5fae6-xtables-lock\") pod \"kube-proxy-dk26g\" (UID: \"49e4de55-0854-4676-bee9-e107a3b5fae6\") " pod="kube-system/kube-proxy-dk26g"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652898    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1aa125e0-c584-41e4-8b34-60b0e868cd6a-cni-cfg\") pod \"kindnet-9vm4c\" (UID: \"1aa125e0-c584-41e4-8b34-60b0e868cd6a\") " pod="kube-system/kindnet-9vm4c"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652922    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49e4de55-0854-4676-bee9-e107a3b5fae6-lib-modules\") pod \"kube-proxy-dk26g\" (UID: \"49e4de55-0854-4676-bee9-e107a3b5fae6\") " pod="kube-system/kube-proxy-dk26g"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652939    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv9pn\" (UniqueName: \"kubernetes.io/projected/49e4de55-0854-4676-bee9-e107a3b5fae6-kube-api-access-dv9pn\") pod \"kube-proxy-dk26g\" (UID: \"49e4de55-0854-4676-bee9-e107a3b5fae6\") " pod="kube-system/kube-proxy-dk26g"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652958    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/49e4de55-0854-4676-bee9-e107a3b5fae6-kube-proxy\") pod \"kube-proxy-dk26g\" (UID: \"49e4de55-0854-4676-bee9-e107a3b5fae6\") " pod="kube-system/kube-proxy-dk26g"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.652976    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aa125e0-c584-41e4-8b34-60b0e868cd6a-xtables-lock\") pod \"kindnet-9vm4c\" (UID: \"1aa125e0-c584-41e4-8b34-60b0e868cd6a\") " pod="kube-system/kindnet-9vm4c"
	Nov 29 09:21:02 no-preload-230403 kubelet[2096]: I1129 09:21:02.781508    2096 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 09:21:05 no-preload-230403 kubelet[2096]: I1129 09:21:05.482139    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9vm4c" podStartSLOduration=1.831790549 podStartE2EDuration="3.482113477s" podCreationTimestamp="2025-11-29 09:21:02 +0000 UTC" firstStartedPulling="2025-11-29 09:21:03.261360009 +0000 UTC m=+6.095872775" lastFinishedPulling="2025-11-29 09:21:04.911682938 +0000 UTC m=+7.746195703" observedRunningTime="2025-11-29 09:21:05.481939207 +0000 UTC m=+8.316451989" watchObservedRunningTime="2025-11-29 09:21:05.482113477 +0000 UTC m=+8.316626243"
	Nov 29 09:21:05 no-preload-230403 kubelet[2096]: I1129 09:21:05.482271    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dk26g" podStartSLOduration=3.482264272 podStartE2EDuration="3.482264272s" podCreationTimestamp="2025-11-29 09:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:21:03.495630755 +0000 UTC m=+6.330143529" watchObservedRunningTime="2025-11-29 09:21:05.482264272 +0000 UTC m=+8.316777046"
	Nov 29 09:21:15 no-preload-230403 kubelet[2096]: I1129 09:21:15.509988    2096 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:21:15 no-preload-230403 kubelet[2096]: I1129 09:21:15.669085    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt9mt\" (UniqueName: \"kubernetes.io/projected/bcd1577c-a3b1-415a-b6fe-ddc56dd52128-kube-api-access-pt9mt\") pod \"storage-provisioner\" (UID: \"bcd1577c-a3b1-415a-b6fe-ddc56dd52128\") " pod="kube-system/storage-provisioner"
	Nov 29 09:21:15 no-preload-230403 kubelet[2096]: I1129 09:21:15.669284    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8966af76-b077-4486-af59-aced26be0a08-config-volume\") pod \"coredns-66bc5c9577-6sxgs\" (UID: \"8966af76-b077-4486-af59-aced26be0a08\") " pod="kube-system/coredns-66bc5c9577-6sxgs"
	Nov 29 09:21:15 no-preload-230403 kubelet[2096]: I1129 09:21:15.669386    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmqb2\" (UniqueName: \"kubernetes.io/projected/8966af76-b077-4486-af59-aced26be0a08-kube-api-access-zmqb2\") pod \"coredns-66bc5c9577-6sxgs\" (UID: \"8966af76-b077-4486-af59-aced26be0a08\") " pod="kube-system/coredns-66bc5c9577-6sxgs"
	Nov 29 09:21:15 no-preload-230403 kubelet[2096]: I1129 09:21:15.669490    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bcd1577c-a3b1-415a-b6fe-ddc56dd52128-tmp\") pod \"storage-provisioner\" (UID: \"bcd1577c-a3b1-415a-b6fe-ddc56dd52128\") " pod="kube-system/storage-provisioner"
	Nov 29 09:21:16 no-preload-230403 kubelet[2096]: I1129 09:21:16.622562    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.622528802 podStartE2EDuration="13.622528802s" podCreationTimestamp="2025-11-29 09:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:21:16.622057931 +0000 UTC m=+19.456570697" watchObservedRunningTime="2025-11-29 09:21:16.622528802 +0000 UTC m=+19.457041576"
	Nov 29 09:21:16 no-preload-230403 kubelet[2096]: I1129 09:21:16.623150    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6sxgs" podStartSLOduration=14.623136198 podStartE2EDuration="14.623136198s" podCreationTimestamp="2025-11-29 09:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:21:16.537569737 +0000 UTC m=+19.372082502" watchObservedRunningTime="2025-11-29 09:21:16.623136198 +0000 UTC m=+19.457649038"
	Nov 29 09:21:19 no-preload-230403 kubelet[2096]: I1129 09:21:19.804471    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqjs\" (UniqueName: \"kubernetes.io/projected/687a18aa-1034-4892-9b86-c0ee20e62df3-kube-api-access-jhqjs\") pod \"busybox\" (UID: \"687a18aa-1034-4892-9b86-c0ee20e62df3\") " pod="default/busybox"
	
	
	==> storage-provisioner [f4a2aa0118a93c01aee62976d9a1b1d28e43e4822b829e507a3bf3a58b4f5243] <==
	I1129 09:21:16.398740       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:21:16.441176       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:21:16.441252       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:21:16.455780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:16.503804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:21:16.537076       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:21:16.538445       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-230403_781b0ec1-ce49-4e99-88af-65c5e6b31216!
	I1129 09:21:16.545246       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"07e55f1f-8797-4ed2-bfa1-48e52251e527", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-230403_781b0ec1-ce49-4e99-88af-65c5e6b31216 became leader
	W1129 09:21:16.590865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:16.610733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:21:16.640272       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-230403_781b0ec1-ce49-4e99-88af-65c5e6b31216!
	W1129 09:21:18.613951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:18.622582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:20.625931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:20.637148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:22.641284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:22.649517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:24.652897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:24.660094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:26.664208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:26.669558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:28.672863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:28.680050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:30.684078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:30.692610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-230403 -n no-preload-230403
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-230403 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (13.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-086358 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [17a6629d-52f0-4e8d-8452-1bf975092ed9] Pending
helpers_test.go:352: "busybox" [17a6629d-52f0-4e8d-8452-1bf975092ed9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [17a6629d-52f0-4e8d-8452-1bf975092ed9] Running
E1129 09:23:47.572770    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004491051s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-086358 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
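Note on the failure above: ulimit -n inside the busybox pod reports the soft open-file limit the container inherited from the runtime, and the test expects the minikube node to raise it to 1048576. A minimal manual re-check against the same profile (assuming the embed-certs-086358 cluster and the busybox pod from testdata/busybox.yaml are still running) is:

	kubectl --context embed-certs-086358 exec busybox -- /bin/sh -c "ulimit -n"    # soft limit, the value the test compares
	kubectl --context embed-certs-086358 exec busybox -- /bin/sh -c "ulimit -Hn"   # hard limit, shown only for comparison

Both commands are read-only; the -Hn form relies on standard busybox/ash shell behaviour and is not part of the test itself.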
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-086358
helpers_test.go:243: (dbg) docker inspect embed-certs-086358:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62",
	        "Created": "2025-11-29T09:22:24.463403992Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 236796,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:22:24.527022025Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62/hostname",
	        "HostsPath": "/var/lib/docker/containers/a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62/hosts",
	        "LogPath": "/var/lib/docker/containers/a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62/a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62-json.log",
	        "Name": "/embed-certs-086358",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-086358:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-086358",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62",
	                "LowerDir": "/var/lib/docker/overlay2/b2c53c864672ca3c55693f7b314c2b772fb66457a4897c27484040d38f636834-init/diff:/var/lib/docker/overlay2/fc2ab0019906b90b3f033fa414f560878b73f7ff0ebdf77a0b554a40813009d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2c53c864672ca3c55693f7b314c2b772fb66457a4897c27484040d38f636834/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2c53c864672ca3c55693f7b314c2b772fb66457a4897c27484040d38f636834/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2c53c864672ca3c55693f7b314c2b772fb66457a4897c27484040d38f636834/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-086358",
	                "Source": "/var/lib/docker/volumes/embed-certs-086358/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-086358",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-086358",
	                "name.minikube.sigs.k8s.io": "embed-certs-086358",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d110d60ffd1659c6704af757e1f1f8b8b4b72fa53317af18897e73fda1e2da76",
	            "SandboxKey": "/var/run/docker/netns/d110d60ffd16",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-086358": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:13:f6:a7:47:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "94288028999fea4312df50c7c050414cb2c3cf91bd0cc6d77dc6860b9f740f8b",
	                    "EndpointID": "7d6cd273a55a9b4323f33f00f04211b82bb0a0d959d56374b2c62d3e8f8bdf34",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-086358",
	                        "a18e36fe3f74"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
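In the HostConfig section of the docker inspect output above, "Ulimits": [] shows that the embed-certs-086358 node container was created without an explicit nofile override, which is consistent with the soft limit of 1024 reported by the failed ulimit check. For comparison only, a plain Docker CLI invocation that does set the limit the test expects would look like the following (illustrative sketch, not a minikube or test-harness command):

	docker run --rm --ulimit nofile=1048576:1048576 busybox sh -c "ulimit -n"

Here nofile=soft:hard sets both limits to 1048576, and the command prints the resulting soft limit rather than the daemon's default.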
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-086358 -n embed-certs-086358
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-086358 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-086358 logs -n 25: (1.253322079s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p cert-expiration-592440 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-592440       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ delete  │ -p cert-expiration-592440                                                                                                                                                                                                                           │ cert-expiration-592440       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-071895 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ stop    │ -p old-k8s-version-071895 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-071895 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p no-preload-230403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ stop    │ -p no-preload-230403 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable dashboard -p no-preload-230403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ start   │ -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:22 UTC │
	│ image   │ old-k8s-version-071895 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ pause   │ -p old-k8s-version-071895 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ unpause │ -p old-k8s-version-071895 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p old-k8s-version-071895                                                                                                                                                                                                                           │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p old-k8s-version-071895                                                                                                                                                                                                                           │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ start   │ -p embed-certs-086358 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:23 UTC │
	│ image   │ no-preload-230403 image list --format=json                                                                                                                                                                                                          │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ pause   │ -p no-preload-230403 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ unpause │ -p no-preload-230403 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p no-preload-230403                                                                                                                                                                                                                                │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p no-preload-230403                                                                                                                                                                                                                                │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p disable-driver-mounts-267340                                                                                                                                                                                                                     │ disable-driver-mounts-267340 │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ start   │ -p default-k8s-diff-port-528769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-528769 │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:22:56
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:22:56.886588  240275 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:22:56.887162  240275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:22:56.887169  240275 out.go:374] Setting ErrFile to fd 2...
	I1129 09:22:56.887174  240275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:22:56.887446  240275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:22:56.887876  240275 out.go:368] Setting JSON to false
	I1129 09:22:56.888888  240275 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3928,"bootTime":1764404249,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 09:22:56.888960  240275 start.go:143] virtualization:  
	I1129 09:22:56.893709  240275 out.go:179] * [default-k8s-diff-port-528769] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:22:56.896949  240275 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:22:56.897007  240275 notify.go:221] Checking for updates...
	I1129 09:22:56.903079  240275 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:22:56.905889  240275 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:22:56.908856  240275 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 09:22:56.911783  240275 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:22:56.914660  240275 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:22:56.918212  240275 config.go:182] Loaded profile config "embed-certs-086358": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:22:56.918329  240275 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:22:56.953130  240275 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:22:56.953251  240275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:22:57.018929  240275 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:22:57.006330515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:22:57.019039  240275 docker.go:319] overlay module found
	I1129 09:22:57.022830  240275 out.go:179] * Using the docker driver based on user configuration
	I1129 09:22:57.025807  240275 start.go:309] selected driver: docker
	I1129 09:22:57.025832  240275 start.go:927] validating driver "docker" against <nil>
	I1129 09:22:57.025847  240275 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:22:57.026584  240275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:22:57.087124  240275 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:22:57.077423086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:22:57.087285  240275 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:22:57.087525  240275 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:22:57.096107  240275 out.go:179] * Using Docker driver with root privileges
	I1129 09:22:57.099816  240275 cni.go:84] Creating CNI manager for ""
	I1129 09:22:57.099901  240275 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:22:57.099913  240275 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:22:57.100009  240275 start.go:353] cluster config:
	{Name:default-k8s-diff-port-528769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-528769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:22:57.103109  240275 out.go:179] * Starting "default-k8s-diff-port-528769" primary control-plane node in "default-k8s-diff-port-528769" cluster
	I1129 09:22:57.106008  240275 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:22:57.108840  240275 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:22:57.111783  240275 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:22:57.111830  240275 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1129 09:22:57.111840  240275 cache.go:65] Caching tarball of preloaded images
	I1129 09:22:57.111877  240275 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:22:57.111925  240275 preload.go:238] Found /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1129 09:22:57.111936  240275 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1129 09:22:57.112044  240275 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/config.json ...
	I1129 09:22:57.112061  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/config.json: {Name:mk53a836b7bb385e995fdae1587bf5271cb50e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:22:57.132953  240275 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:22:57.132978  240275 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:22:57.132999  240275 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:22:57.133032  240275 start.go:360] acquireMachinesLock for default-k8s-diff-port-528769: {Name:mk914e0f0d088ade1b42caaad044a8f91bf65d7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:22:57.133152  240275 start.go:364] duration metric: took 98.873µs to acquireMachinesLock for "default-k8s-diff-port-528769"
	I1129 09:22:57.133182  240275 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-528769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-528769 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:22:57.133250  240275 start.go:125] createHost starting for "" (driver="docker")
	I1129 09:22:56.809506  236407 addons.go:530] duration metric: took 2.476227417s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1129 09:22:58.256069  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:22:57.136560  240275 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:22:57.136841  240275 start.go:159] libmachine.API.Create for "default-k8s-diff-port-528769" (driver="docker")
	I1129 09:22:57.136880  240275 client.go:173] LocalClient.Create starting
	I1129 09:22:57.136953  240275 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem
	I1129 09:22:57.136994  240275 main.go:143] libmachine: Decoding PEM data...
	I1129 09:22:57.137021  240275 main.go:143] libmachine: Parsing certificate...
	I1129 09:22:57.137084  240275 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem
	I1129 09:22:57.137106  240275 main.go:143] libmachine: Decoding PEM data...
	I1129 09:22:57.137122  240275 main.go:143] libmachine: Parsing certificate...
	I1129 09:22:57.137512  240275 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-528769 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:22:57.153582  240275 cli_runner.go:211] docker network inspect default-k8s-diff-port-528769 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:22:57.153672  240275 network_create.go:284] running [docker network inspect default-k8s-diff-port-528769] to gather additional debugging logs...
	I1129 09:22:57.153695  240275 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-528769
	W1129 09:22:57.170156  240275 cli_runner.go:211] docker network inspect default-k8s-diff-port-528769 returned with exit code 1
	I1129 09:22:57.170195  240275 network_create.go:287] error running [docker network inspect default-k8s-diff-port-528769]: docker network inspect default-k8s-diff-port-528769: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-528769 not found
	I1129 09:22:57.170209  240275 network_create.go:289] output of [docker network inspect default-k8s-diff-port-528769]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-528769 not found
	
	** /stderr **
	I1129 09:22:57.170338  240275 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:22:57.187501  240275 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8664e809540f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:5a:a5:48:89:fb} reservation:<nil>}
	I1129 09:22:57.187846  240275 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe5a1fed3d29 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8e:0c:ca:69:14:77} reservation:<nil>}
	I1129 09:22:57.188187  240275 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c3b36bc67c6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:2d:06:dd:2d:03} reservation:<nil>}
	I1129 09:22:57.188477  240275 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-94288028999f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:eb:14:45:1a:19} reservation:<nil>}
	I1129 09:22:57.188941  240275 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e92b0}
	I1129 09:22:57.188970  240275 network_create.go:124] attempt to create docker network default-k8s-diff-port-528769 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1129 09:22:57.189028  240275 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-528769 default-k8s-diff-port-528769
	I1129 09:22:57.259393  240275 network_create.go:108] docker network default-k8s-diff-port-528769 192.168.85.0/24 created
	I1129 09:22:57.259429  240275 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-528769" container
	I1129 09:22:57.259516  240275 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:22:57.281792  240275 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-528769 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-528769 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:22:57.300032  240275 oci.go:103] Successfully created a docker volume default-k8s-diff-port-528769
	I1129 09:22:57.300130  240275 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-528769-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-528769 --entrypoint /usr/bin/test -v default-k8s-diff-port-528769:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:22:57.848603  240275 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-528769
	I1129 09:22:57.848705  240275 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:22:57.848715  240275 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:22:57.848784  240275 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-528769:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1129 09:23:00.270798  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:02.757801  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:02.541498  240275 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-528769:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.692676691s)
	I1129 09:23:02.541533  240275 kic.go:203] duration metric: took 4.692815597s to extract preloaded images to volume ...
	W1129 09:23:02.541685  240275 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1129 09:23:02.541803  240275 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:23:02.597303  240275 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-528769 --name default-k8s-diff-port-528769 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-528769 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-528769 --network default-k8s-diff-port-528769 --ip 192.168.85.2 --volume default-k8s-diff-port-528769:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:23:02.940132  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Running}}
	I1129 09:23:02.963590  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:02.987992  240275 cli_runner.go:164] Run: docker exec default-k8s-diff-port-528769 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:23:03.055879  240275 oci.go:144] the created container "default-k8s-diff-port-528769" has a running status.
	I1129 09:23:03.055908  240275 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa...
	I1129 09:23:03.329893  240275 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:23:03.354333  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:03.380414  240275 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:23:03.380440  240275 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-528769 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:23:03.443668  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:03.476147  240275 machine.go:94] provisionDockerMachine start ...
	I1129 09:23:03.476243  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:03.503234  240275 main.go:143] libmachine: Using SSH client type: native
	I1129 09:23:03.503577  240275 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1129 09:23:03.503585  240275 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:23:03.504318  240275 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 09:23:06.660568  240275 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-528769
	
	I1129 09:23:06.660593  240275 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-528769"
	I1129 09:23:06.660683  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:06.678983  240275 main.go:143] libmachine: Using SSH client type: native
	I1129 09:23:06.679306  240275 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1129 09:23:06.679326  240275 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-528769 && echo "default-k8s-diff-port-528769" | sudo tee /etc/hostname
	I1129 09:23:06.842728  240275 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-528769
	
	I1129 09:23:06.842945  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:06.861099  240275 main.go:143] libmachine: Using SSH client type: native
	I1129 09:23:06.861436  240275 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1129 09:23:06.861464  240275 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-528769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-528769/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-528769' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1129 09:23:05.256346  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:07.257251  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:07.014053  240275 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:23:07.014098  240275 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-2317/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-2317/.minikube}
	I1129 09:23:07.014142  240275 ubuntu.go:190] setting up certificates
	I1129 09:23:07.014162  240275 provision.go:84] configureAuth start
	I1129 09:23:07.014265  240275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-528769
	I1129 09:23:07.035244  240275 provision.go:143] copyHostCerts
	I1129 09:23:07.035313  240275 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem, removing ...
	I1129 09:23:07.035322  240275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem
	I1129 09:23:07.035403  240275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem (1123 bytes)
	I1129 09:23:07.035500  240275 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem, removing ...
	I1129 09:23:07.035513  240275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem
	I1129 09:23:07.035540  240275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem (1679 bytes)
	I1129 09:23:07.035591  240275 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem, removing ...
	I1129 09:23:07.035595  240275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem
	I1129 09:23:07.035622  240275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem (1082 bytes)
	I1129 09:23:07.035701  240275 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-528769 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-528769 localhost minikube]
	I1129 09:23:07.221271  240275 provision.go:177] copyRemoteCerts
	I1129 09:23:07.221339  240275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:23:07.221392  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:07.239928  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:07.349722  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:23:07.368126  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:23:07.387319  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1129 09:23:07.407076  240275 provision.go:87] duration metric: took 392.871055ms to configureAuth
	I1129 09:23:07.407116  240275 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:23:07.407476  240275 config.go:182] Loaded profile config "default-k8s-diff-port-528769": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:23:07.407499  240275 machine.go:97] duration metric: took 3.931328976s to provisionDockerMachine
	I1129 09:23:07.407507  240275 client.go:176] duration metric: took 10.270617706s to LocalClient.Create
	I1129 09:23:07.407533  240275 start.go:167] duration metric: took 10.270692381s to libmachine.API.Create "default-k8s-diff-port-528769"
	I1129 09:23:07.407541  240275 start.go:293] postStartSetup for "default-k8s-diff-port-528769" (driver="docker")
	I1129 09:23:07.407563  240275 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:23:07.407643  240275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:23:07.407690  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:07.427475  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:07.542270  240275 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:23:07.545896  240275 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:23:07.545927  240275 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:23:07.545941  240275 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/addons for local assets ...
	I1129 09:23:07.546005  240275 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/files for local assets ...
	I1129 09:23:07.546089  240275 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem -> 41372.pem in /etc/ssl/certs
	I1129 09:23:07.546198  240275 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:23:07.555192  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:23:07.574057  240275 start.go:296] duration metric: took 166.500841ms for postStartSetup
	I1129 09:23:07.574488  240275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-528769
	I1129 09:23:07.591751  240275 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/config.json ...
	I1129 09:23:07.592046  240275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:23:07.592094  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:07.609542  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:07.714196  240275 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:23:07.719192  240275 start.go:128] duration metric: took 10.585927311s to createHost
	I1129 09:23:07.719219  240275 start.go:83] releasing machines lock for "default-k8s-diff-port-528769", held for 10.586053572s
	I1129 09:23:07.719301  240275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-528769
	I1129 09:23:07.742045  240275 ssh_runner.go:195] Run: cat /version.json
	I1129 09:23:07.742093  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:07.742324  240275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:23:07.742370  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:07.777333  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:07.790431  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:07.888550  240275 ssh_runner.go:195] Run: systemctl --version
	I1129 09:23:07.976986  240275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:23:07.988874  240275 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:23:07.988966  240275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:23:08.028891  240275 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1129 09:23:08.028920  240275 start.go:496] detecting cgroup driver to use...
	I1129 09:23:08.028967  240275 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 09:23:08.029037  240275 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:23:08.047017  240275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:23:08.062615  240275 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:23:08.062682  240275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:23:08.081653  240275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:23:08.101407  240275 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:23:08.228535  240275 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:23:08.369349  240275 docker.go:234] disabling docker service ...
	I1129 09:23:08.369467  240275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:23:08.392300  240275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:23:08.407379  240275 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:23:08.542008  240275 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:23:08.675182  240275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:23:08.691017  240275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:23:08.706677  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:23:08.716649  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:23:08.725875  240275 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1129 09:23:08.725952  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1129 09:23:08.736397  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:23:08.746920  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:23:08.760702  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:23:08.770566  240275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:23:08.780565  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:23:08.790252  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:23:08.799658  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:23:08.809215  240275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:23:08.818703  240275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:23:08.827039  240275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:23:08.951618  240275 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:23:09.108759  240275 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:23:09.108881  240275 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:23:09.112994  240275 start.go:564] Will wait 60s for crictl version
	I1129 09:23:09.113108  240275 ssh_runner.go:195] Run: which crictl
	I1129 09:23:09.116782  240275 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:23:09.143455  240275 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:23:09.143588  240275 ssh_runner.go:195] Run: containerd --version
	I1129 09:23:09.164571  240275 ssh_runner.go:195] Run: containerd --version
	I1129 09:23:09.194615  240275 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:23:09.197541  240275 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-528769 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:23:09.213699  240275 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 09:23:09.217761  240275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:23:09.232515  240275 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-528769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-528769 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:23:09.232673  240275 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:23:09.232748  240275 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:23:09.261041  240275 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:23:09.261065  240275 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:23:09.261124  240275 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:23:09.286376  240275 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:23:09.286411  240275 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:23:09.286420  240275 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1129 09:23:09.286554  240275 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-528769 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-528769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:23:09.286631  240275 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:23:09.311567  240275 cni.go:84] Creating CNI manager for ""
	I1129 09:23:09.311593  240275 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:23:09.311606  240275 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:23:09.311631  240275 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-528769 NodeName:default-k8s-diff-port-528769 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:23:09.311750  240275 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-528769"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:23:09.311817  240275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:23:09.319701  240275 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:23:09.319770  240275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:23:09.328133  240275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1129 09:23:09.352587  240275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:23:09.365839  240275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I1129 09:23:09.379625  240275 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:23:09.383306  240275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:23:09.392827  240275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:23:09.521000  240275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:23:09.539047  240275 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769 for IP: 192.168.85.2
	I1129 09:23:09.539067  240275 certs.go:195] generating shared ca certs ...
	I1129 09:23:09.539083  240275 certs.go:227] acquiring lock for ca certs: {Name:mke655c14945a8520f2f9de36531df923afb2bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:09.539223  240275 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key
	I1129 09:23:09.539275  240275 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key
	I1129 09:23:09.539285  240275 certs.go:257] generating profile certs ...
	I1129 09:23:09.539339  240275 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.key
	I1129 09:23:09.539356  240275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.crt with IP's: []
	I1129 09:23:09.806954  240275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.crt ...
	I1129 09:23:09.806989  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.crt: {Name:mkccb154cd5bbc2795906704a4034218f3573327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:09.807861  240275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.key ...
	I1129 09:23:09.807880  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.key: {Name:mkeca98bac5795c13ad059b9a36eb31374878d65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:09.807979  240275 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key.10155db3
	I1129 09:23:09.808005  240275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt.10155db3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1129 09:23:09.942874  240275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt.10155db3 ...
	I1129 09:23:09.942910  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt.10155db3: {Name:mka6229a7322599f23ec94877297394ab51f4eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:09.943135  240275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key.10155db3 ...
	I1129 09:23:09.943154  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key.10155db3: {Name:mk170c87ed21393949dbb46aaabd8dae18f2b31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:09.943250  240275 certs.go:382] copying /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt.10155db3 -> /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt
	I1129 09:23:09.943331  240275 certs.go:386] copying /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key.10155db3 -> /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key
	I1129 09:23:09.943407  240275 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.key
	I1129 09:23:09.943426  240275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.crt with IP's: []
	I1129 09:23:10.054844  240275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.crt ...
	I1129 09:23:10.054878  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.crt: {Name:mk0ce369c67aa0068714a185899730953f48c746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:10.055072  240275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.key ...
	I1129 09:23:10.055087  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.key: {Name:mkbd581549b9876eab92384291dd3793dd4a3e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:10.055279  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem (1338 bytes)
	W1129 09:23:10.055327  240275 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137_empty.pem, impossibly tiny 0 bytes
	I1129 09:23:10.055341  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:23:10.055369  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:23:10.055401  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:23:10.055429  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem (1679 bytes)
	I1129 09:23:10.055482  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:23:10.056046  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:23:10.077464  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1129 09:23:10.109432  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:23:10.133111  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:23:10.152942  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1129 09:23:10.172098  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:23:10.192097  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:23:10.212100  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:23:10.231859  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:23:10.252208  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem --> /usr/share/ca-certificates/4137.pem (1338 bytes)
	I1129 09:23:10.279963  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /usr/share/ca-certificates/41372.pem (1708 bytes)
	I1129 09:23:10.299613  240275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:23:10.314348  240275 ssh_runner.go:195] Run: openssl version
	I1129 09:23:10.321487  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:23:10.330486  240275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:23:10.334626  240275 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:23:10.334735  240275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:23:10.377013  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:23:10.386451  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4137.pem && ln -fs /usr/share/ca-certificates/4137.pem /etc/ssl/certs/4137.pem"
	I1129 09:23:10.395262  240275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4137.pem
	I1129 09:23:10.399226  240275 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/4137.pem
	I1129 09:23:10.399331  240275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4137.pem
	I1129 09:23:10.442777  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4137.pem /etc/ssl/certs/51391683.0"
	I1129 09:23:10.451672  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41372.pem && ln -fs /usr/share/ca-certificates/41372.pem /etc/ssl/certs/41372.pem"
	I1129 09:23:10.460746  240275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41372.pem
	I1129 09:23:10.464938  240275 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/41372.pem
	I1129 09:23:10.465007  240275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41372.pem
	I1129 09:23:10.518834  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41372.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:23:10.529634  240275 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:23:10.534206  240275 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:23:10.534277  240275 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-528769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-528769 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:23:10.534356  240275 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:23:10.534431  240275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:23:10.562679  240275 cri.go:89] found id: ""
	I1129 09:23:10.562794  240275 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:23:10.570957  240275 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:23:10.579051  240275 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:23:10.579130  240275 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:23:10.587258  240275 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:23:10.587292  240275 kubeadm.go:158] found existing configuration files:
	
	I1129 09:23:10.587375  240275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1129 09:23:10.595663  240275 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:23:10.595752  240275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:23:10.603803  240275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1129 09:23:10.613513  240275 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:23:10.613619  240275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:23:10.621331  240275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1129 09:23:10.629472  240275 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:23:10.629586  240275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:23:10.637322  240275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1129 09:23:10.645506  240275 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:23:10.645597  240275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:23:10.653415  240275 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:23:10.700349  240275 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:23:10.700446  240275 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:23:10.727649  240275 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:23:10.727783  240275 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1129 09:23:10.727849  240275 kubeadm.go:319] OS: Linux
	I1129 09:23:10.727914  240275 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:23:10.727990  240275 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1129 09:23:10.728066  240275 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:23:10.728141  240275 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:23:10.728217  240275 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:23:10.728287  240275 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:23:10.728359  240275 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:23:10.728438  240275 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:23:10.728504  240275 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1129 09:23:10.811028  240275 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:23:10.811195  240275 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:23:10.811323  240275 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:23:10.818596  240275 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:23:10.824908  240275 out.go:252]   - Generating certificates and keys ...
	I1129 09:23:10.825103  240275 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:23:10.825186  240275 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:23:11.106139  240275 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1129 09:23:09.757543  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:12.256885  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:12.962060  240275 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:23:13.287365  240275 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:23:13.797605  240275 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:23:13.885395  240275 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:23:13.885772  240275 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-528769 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:23:14.539005  240275 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:23:14.539308  240275 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-528769 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:23:15.510996  240275 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:23:15.617440  240275 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:23:16.157752  240275 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:23:16.158063  240275 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1129 09:23:14.257088  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:16.257836  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:16.925956  240275 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:23:17.371600  240275 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:23:18.014048  240275 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:23:18.493342  240275 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:23:18.712151  240275 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:23:18.713131  240275 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:23:18.717111  240275 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:23:18.720404  240275 out.go:252]   - Booting up control plane ...
	I1129 09:23:18.720514  240275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:23:18.720595  240275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:23:18.721362  240275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:23:18.739183  240275 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:23:18.739299  240275 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:23:18.746752  240275 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:23:18.747091  240275 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:23:18.749942  240275 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:23:18.891551  240275 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:23:18.891671  240275 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:23:19.893060  240275 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001781342s
	I1129 09:23:19.896503  240275 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:23:19.896597  240275 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1129 09:23:19.896955  240275 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:23:19.897046  240275 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1129 09:23:18.756574  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:20.758518  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:23.256294  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:22.241412  240275 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.344329117s
	I1129 09:23:24.857356  240275 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.960792639s
	I1129 09:23:26.900001  240275 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003223708s
	I1129 09:23:26.921571  240275 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:23:26.945311  240275 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:23:26.965618  240275 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:23:26.965844  240275 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-528769 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:23:26.980087  240275 kubeadm.go:319] [bootstrap-token] Using token: zdgyh9.0wpg2ibusyd3huv3
	I1129 09:23:26.983038  240275 out.go:252]   - Configuring RBAC rules ...
	I1129 09:23:26.983168  240275 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:23:26.988070  240275 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:23:27.009422  240275 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:23:27.024949  240275 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:23:27.032483  240275 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:23:27.038193  240275 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:23:27.309607  240275 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:23:27.735557  240275 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:23:28.309702  240275 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:23:28.310953  240275 kubeadm.go:319] 
	I1129 09:23:28.311024  240275 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:23:28.311030  240275 kubeadm.go:319] 
	I1129 09:23:28.311102  240275 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:23:28.311106  240275 kubeadm.go:319] 
	I1129 09:23:28.311129  240275 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:23:28.311185  240275 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:23:28.311232  240275 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:23:28.311235  240275 kubeadm.go:319] 
	I1129 09:23:28.311286  240275 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:23:28.311289  240275 kubeadm.go:319] 
	I1129 09:23:28.311334  240275 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:23:28.311337  240275 kubeadm.go:319] 
	I1129 09:23:28.311386  240275 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:23:28.311456  240275 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:23:28.311523  240275 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:23:28.311527  240275 kubeadm.go:319] 
	I1129 09:23:28.311607  240275 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:23:28.311689  240275 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:23:28.311694  240275 kubeadm.go:319] 
	I1129 09:23:28.311773  240275 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token zdgyh9.0wpg2ibusyd3huv3 \
	I1129 09:23:28.311870  240275 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:de8e56270375befae923bc70a44a39424a62093a1080181ff9ea4b4afb1027a6 \
	I1129 09:23:28.311889  240275 kubeadm.go:319] 	--control-plane 
	I1129 09:23:28.311897  240275 kubeadm.go:319] 
	I1129 09:23:28.311977  240275 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:23:28.311981  240275 kubeadm.go:319] 
	I1129 09:23:28.312064  240275 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token zdgyh9.0wpg2ibusyd3huv3 \
	I1129 09:23:28.312161  240275 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:de8e56270375befae923bc70a44a39424a62093a1080181ff9ea4b4afb1027a6 
	I1129 09:23:28.316589  240275 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1129 09:23:28.316857  240275 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 09:23:28.316970  240275 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 09:23:28.316995  240275 cni.go:84] Creating CNI manager for ""
	I1129 09:23:28.317008  240275 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:23:28.320302  240275 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1129 09:23:25.257411  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:27.756892  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:28.323156  240275 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:23:28.327426  240275 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:23:28.327455  240275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:23:28.341412  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:23:28.662860  240275 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:23:28.662997  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:28.663084  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-528769 minikube.k8s.io/updated_at=2025_11_29T09_23_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=default-k8s-diff-port-528769 minikube.k8s.io/primary=true
	I1129 09:23:28.861054  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:28.861131  240275 ops.go:34] apiserver oom_adj: -16
	I1129 09:23:29.361242  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:29.861859  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:30.361053  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:30.861089  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:31.361566  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:31.861793  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:32.361140  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:32.861299  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:33.115156  240275 kubeadm.go:1114] duration metric: took 4.45220329s to wait for elevateKubeSystemPrivileges
	I1129 09:23:33.115188  240275 kubeadm.go:403] duration metric: took 22.580917755s to StartCluster
	I1129 09:23:33.115206  240275 settings.go:142] acquiring lock: {Name:mk44917d1324740eeda65bf3aa312ad1561d3ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:33.115270  240275 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:23:33.117074  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:33.117924  240275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:23:33.117947  240275 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:23:33.118012  240275 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-528769"
	I1129 09:23:33.117916  240275 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:23:33.118026  240275 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-528769"
	I1129 09:23:33.118050  240275 host.go:66] Checking if "default-k8s-diff-port-528769" exists ...
	I1129 09:23:33.118715  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:33.119150  240275 config.go:182] Loaded profile config "default-k8s-diff-port-528769": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:23:33.119221  240275 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-528769"
	I1129 09:23:33.119240  240275 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-528769"
	I1129 09:23:33.119495  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:33.122336  240275 out.go:179] * Verifying Kubernetes components...
	I1129 09:23:33.128673  240275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:23:33.161726  240275 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1129 09:23:29.757288  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:32.256523  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:33.164798  240275 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:23:33.164823  240275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:23:33.164885  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:33.166088  240275 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-528769"
	I1129 09:23:33.166122  240275 host.go:66] Checking if "default-k8s-diff-port-528769" exists ...
	I1129 09:23:33.175912  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:33.199281  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:33.222515  240275 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:23:33.222541  240275 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:23:33.222609  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:33.258168  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:33.508766  240275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:23:33.629665  240275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:23:33.629919  240275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:23:33.631148  240275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:23:34.323306  240275 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1129 09:23:34.326062  240275 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-528769" to be "Ready" ...
	I1129 09:23:34.369831  240275 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 09:23:34.372744  240275 addons.go:530] duration metric: took 1.254792097s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:23:34.827262  240275 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-528769" context rescaled to 1 replicas
	W1129 09:23:36.336846  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:23:34.256791  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:36.757719  236407 node_ready.go:49] node "embed-certs-086358" is "Ready"
	I1129 09:23:36.757751  236407 node_ready.go:38] duration metric: took 40.504683915s for node "embed-certs-086358" to be "Ready" ...
	I1129 09:23:36.757767  236407 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:23:36.757839  236407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:23:36.772152  236407 api_server.go:72] duration metric: took 42.439130579s to wait for apiserver process to appear ...
	I1129 09:23:36.772179  236407 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:23:36.772198  236407 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:23:36.781673  236407 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:23:36.782789  236407 api_server.go:141] control plane version: v1.34.1
	I1129 09:23:36.782819  236407 api_server.go:131] duration metric: took 10.632744ms to wait for apiserver health ...
	I1129 09:23:36.782828  236407 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:23:36.786672  236407 system_pods.go:59] 8 kube-system pods found
	I1129 09:23:36.786708  236407 system_pods.go:61] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:23:36.786751  236407 system_pods.go:61] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running
	I1129 09:23:36.786759  236407 system_pods.go:61] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running
	I1129 09:23:36.786763  236407 system_pods.go:61] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running
	I1129 09:23:36.786767  236407 system_pods.go:61] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running
	I1129 09:23:36.786788  236407 system_pods.go:61] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running
	I1129 09:23:36.786799  236407 system_pods.go:61] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running
	I1129 09:23:36.786804  236407 system_pods.go:61] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:23:36.786821  236407 system_pods.go:74] duration metric: took 3.977435ms to wait for pod list to return data ...
	I1129 09:23:36.786842  236407 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:23:36.796913  236407 default_sa.go:45] found service account: "default"
	I1129 09:23:36.796942  236407 default_sa.go:55] duration metric: took 10.093107ms for default service account to be created ...
	I1129 09:23:36.796953  236407 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:23:36.800120  236407 system_pods.go:86] 8 kube-system pods found
	I1129 09:23:36.800156  236407 system_pods.go:89] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:23:36.800163  236407 system_pods.go:89] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running
	I1129 09:23:36.800169  236407 system_pods.go:89] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running
	I1129 09:23:36.800176  236407 system_pods.go:89] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running
	I1129 09:23:36.800181  236407 system_pods.go:89] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running
	I1129 09:23:36.800185  236407 system_pods.go:89] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running
	I1129 09:23:36.800189  236407 system_pods.go:89] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running
	I1129 09:23:36.800195  236407 system_pods.go:89] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:23:36.800218  236407 retry.go:31] will retry after 244.931691ms: missing components: kube-dns
	I1129 09:23:37.049204  236407 system_pods.go:86] 8 kube-system pods found
	I1129 09:23:37.049240  236407 system_pods.go:89] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:23:37.049249  236407 system_pods.go:89] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running
	I1129 09:23:37.049256  236407 system_pods.go:89] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running
	I1129 09:23:37.049261  236407 system_pods.go:89] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running
	I1129 09:23:37.049268  236407 system_pods.go:89] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running
	I1129 09:23:37.049271  236407 system_pods.go:89] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running
	I1129 09:23:37.049275  236407 system_pods.go:89] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running
	I1129 09:23:37.049281  236407 system_pods.go:89] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:23:37.049299  236407 retry.go:31] will retry after 351.544334ms: missing components: kube-dns
	I1129 09:23:37.406631  236407 system_pods.go:86] 8 kube-system pods found
	I1129 09:23:37.406668  236407 system_pods.go:89] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:23:37.406676  236407 system_pods.go:89] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running
	I1129 09:23:37.406684  236407 system_pods.go:89] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running
	I1129 09:23:37.406688  236407 system_pods.go:89] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running
	I1129 09:23:37.406693  236407 system_pods.go:89] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running
	I1129 09:23:37.406697  236407 system_pods.go:89] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running
	I1129 09:23:37.406701  236407 system_pods.go:89] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running
	I1129 09:23:37.406708  236407 system_pods.go:89] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:23:37.406727  236407 retry.go:31] will retry after 466.917085ms: missing components: kube-dns
	I1129 09:23:37.878651  236407 system_pods.go:86] 8 kube-system pods found
	I1129 09:23:37.878718  236407 system_pods.go:89] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Running
	I1129 09:23:37.878732  236407 system_pods.go:89] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running
	I1129 09:23:37.878744  236407 system_pods.go:89] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running
	I1129 09:23:37.878756  236407 system_pods.go:89] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running
	I1129 09:23:37.878766  236407 system_pods.go:89] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running
	I1129 09:23:37.878771  236407 system_pods.go:89] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running
	I1129 09:23:37.878788  236407 system_pods.go:89] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running
	I1129 09:23:37.878797  236407 system_pods.go:89] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Running
	I1129 09:23:37.878806  236407 system_pods.go:126] duration metric: took 1.081846492s to wait for k8s-apps to be running ...
	I1129 09:23:37.878824  236407 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:23:37.878902  236407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:23:37.897082  236407 system_svc.go:56] duration metric: took 18.248356ms WaitForService to wait for kubelet
	I1129 09:23:37.897114  236407 kubeadm.go:587] duration metric: took 43.564102014s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:23:37.897147  236407 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:23:37.900752  236407 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:23:37.900800  236407 node_conditions.go:123] node cpu capacity is 2
	I1129 09:23:37.900815  236407 node_conditions.go:105] duration metric: took 3.661388ms to run NodePressure ...
	I1129 09:23:37.900828  236407 start.go:242] waiting for startup goroutines ...
	I1129 09:23:37.900835  236407 start.go:247] waiting for cluster config update ...
	I1129 09:23:37.900847  236407 start.go:256] writing updated cluster config ...
	I1129 09:23:37.901146  236407 ssh_runner.go:195] Run: rm -f paused
	I1129 09:23:37.905070  236407 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:23:37.909403  236407 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2fhrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.914613  236407 pod_ready.go:94] pod "coredns-66bc5c9577-2fhrs" is "Ready"
	I1129 09:23:37.914644  236407 pod_ready.go:86] duration metric: took 5.213323ms for pod "coredns-66bc5c9577-2fhrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.917366  236407 pod_ready.go:83] waiting for pod "etcd-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.922734  236407 pod_ready.go:94] pod "etcd-embed-certs-086358" is "Ready"
	I1129 09:23:37.922764  236407 pod_ready.go:86] duration metric: took 5.371363ms for pod "etcd-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.925356  236407 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.935376  236407 pod_ready.go:94] pod "kube-apiserver-embed-certs-086358" is "Ready"
	I1129 09:23:37.935405  236407 pod_ready.go:86] duration metric: took 10.024019ms for pod "kube-apiserver-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.938034  236407 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:38.310011  236407 pod_ready.go:94] pod "kube-controller-manager-embed-certs-086358" is "Ready"
	I1129 09:23:38.310038  236407 pod_ready.go:86] duration metric: took 371.976112ms for pod "kube-controller-manager-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:38.509764  236407 pod_ready.go:83] waiting for pod "kube-proxy-2qzkl" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:38.909143  236407 pod_ready.go:94] pod "kube-proxy-2qzkl" is "Ready"
	I1129 09:23:38.909223  236407 pod_ready.go:86] duration metric: took 399.433237ms for pod "kube-proxy-2qzkl" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:39.110447  236407 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:39.510364  236407 pod_ready.go:94] pod "kube-scheduler-embed-certs-086358" is "Ready"
	I1129 09:23:39.510395  236407 pod_ready.go:86] duration metric: took 399.922019ms for pod "kube-scheduler-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:39.510408  236407 pod_ready.go:40] duration metric: took 1.605298686s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:23:39.579367  236407 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 09:23:39.584594  236407 out.go:179] * Done! kubectl is now configured to use "embed-certs-086358" cluster and "default" namespace by default
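	For reference, the cluster the log above reports as ready can be checked directly with kubectl against the same context. A minimal sketch using the context name from this run (these commands are not part of the harness output):
	
	  kubectl --context embed-certs-086358 get nodes -o wide
	  kubectl --context embed-certs-086358 -n kube-system get pods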
	W1129 09:23:38.829404  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:23:40.829873  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:23:43.330432  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:23:45.829486  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
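	The sections that follow are the post-mortem dumps minikube gathers from the node after the failure. The container listing below, for example, can be reproduced by hand with crictl inside the node; a sketch, assuming the profile name from this run:
	
	  minikube -p embed-certs-086358 ssh -- sudo crictl ps -a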
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	79b2b865b7fe8       1611cd07b61d5       6 seconds ago        Running             busybox                   0                   efd8e18d67cf0       busybox                                      default
	142f1b95a243c       138784d87c9c5       11 seconds ago       Running             coredns                   0                   5369c6303bf8e       coredns-66bc5c9577-2fhrs                     kube-system
	71da9bf637f99       ba04bb24b9575       11 seconds ago       Running             storage-provisioner       0                   0efca620800d2       storage-provisioner                          kube-system
	463144a8348fe       b1a8c6f707935       53 seconds ago       Running             kindnet-cni               0                   aab4417da4c79       kindnet-2x7dg                                kube-system
	0221d25cfd4dd       05baa95f5142d       53 seconds ago       Running             kube-proxy                0                   2b3976be500f5       kube-proxy-2qzkl                             kube-system
	c0577342962bc       a1894772a478e       About a minute ago   Running             etcd                      0                   ce0ecdda9a07e       etcd-embed-certs-086358                      kube-system
	63d03d07ac0a1       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   8554cc3301ab4       kube-scheduler-embed-certs-086358            kube-system
	9a782a50e3036       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   1a4b8eacf5060       kube-apiserver-embed-certs-086358            kube-system
	593a51223ee9a       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   29fb789d52ff0       kube-controller-manager-embed-certs-086358   kube-system
	
	
	==> containerd <==
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.166574472Z" level=info msg="CreateContainer within sandbox \"0efca620800d2c5f1427a9202f9a9b882e4fdd5e4a5d4926bc2000b1db598beb\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f\""
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.167231231Z" level=info msg="StartContainer for \"71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f\""
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.168508744Z" level=info msg="connecting to shim 71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f" address="unix:///run/containerd/s/5fa54d704a5f6ddf23b1dbe2a9a099dfe21b11fa7e715c58c837c2f9e9f8681a" protocol=ttrpc version=3
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.175905235Z" level=info msg="Container 142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.192703047Z" level=info msg="CreateContainer within sandbox \"5369c6303bf8e2c5b80e7f9fdb8af50f09c0a14c9d3bfc7f532cf76fee6c4d3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a\""
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.195857289Z" level=info msg="StartContainer for \"142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a\""
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.198787465Z" level=info msg="connecting to shim 142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a" address="unix:///run/containerd/s/9bbe579b1fde093cc80ae316fad875a7a9d8b9993ae392aab34218490d6f8471" protocol=ttrpc version=3
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.262855492Z" level=info msg="StartContainer for \"71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f\" returns successfully"
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.288354927Z" level=info msg="StartContainer for \"142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a\" returns successfully"
	Nov 29 09:23:40 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:40.129429707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17a6629d-52f0-4e8d-8452-1bf975092ed9,Namespace:default,Attempt:0,}"
	Nov 29 09:23:40 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:40.201463845Z" level=info msg="connecting to shim efd8e18d67cf06c4cefcc26ca617e5fcc785c60802972c8c187040074b249962" address="unix:///run/containerd/s/ef4ffd0eb0b0e4bb117ff41f24dcf2c6602ce90ea44fd96fbb282017970f2120" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:23:40 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:40.283088244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17a6629d-52f0-4e8d-8452-1bf975092ed9,Namespace:default,Attempt:0,} returns sandbox id \"efd8e18d67cf06c4cefcc26ca617e5fcc785c60802972c8c187040074b249962\""
	Nov 29 09:23:40 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:40.288093041Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.414880141Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.416779303Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.419297882Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.424319861Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.425283879Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.137109582s"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.425431655Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.432164684Z" level=info msg="CreateContainer within sandbox \"efd8e18d67cf06c4cefcc26ca617e5fcc785c60802972c8c187040074b249962\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.444062183Z" level=info msg="Container 79b2b865b7fe86303d0af05fef1d8540a010aa143c17fa9f335a88f68da9b2c6: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.458364849Z" level=info msg="CreateContainer within sandbox \"efd8e18d67cf06c4cefcc26ca617e5fcc785c60802972c8c187040074b249962\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"79b2b865b7fe86303d0af05fef1d8540a010aa143c17fa9f335a88f68da9b2c6\""
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.459539166Z" level=info msg="StartContainer for \"79b2b865b7fe86303d0af05fef1d8540a010aa143c17fa9f335a88f68da9b2c6\""
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.461619982Z" level=info msg="connecting to shim 79b2b865b7fe86303d0af05fef1d8540a010aa143c17fa9f335a88f68da9b2c6" address="unix:///run/containerd/s/ef4ffd0eb0b0e4bb117ff41f24dcf2c6602ce90ea44fd96fbb282017970f2120" protocol=ttrpc version=3
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.540461652Z" level=info msg="StartContainer for \"79b2b865b7fe86303d0af05fef1d8540a010aa143c17fa9f335a88f68da9b2c6\" returns successfully"
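	The containerd entries above are journald-format lines from inside the node; they can also be pulled directly. A sketch, assuming containerd runs as a systemd unit in the minikube node image (as the hostname/PID prefixes above suggest):
	
	  minikube -p embed-certs-086358 ssh -- sudo journalctl -u containerd --no-pager -n 50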
	
	
	==> coredns [142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48306 - 13485 "HINFO IN 6034152585137040996.4390996263943985383. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023323184s
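	The same CoreDNS output can be fetched from the pod named earlier in this run; a minimal sketch:
	
	  kubectl --context embed-certs-086358 -n kube-system logs coredns-66bc5c9577-2fhrs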
	
	
	==> describe nodes <==
	Name:               embed-certs-086358
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-086358
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=embed-certs-086358
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_22_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:22:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-086358
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:23:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:23:36 +0000   Sat, 29 Nov 2025 09:22:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:23:36 +0000   Sat, 29 Nov 2025 09:22:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:23:36 +0000   Sat, 29 Nov 2025 09:22:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:23:36 +0000   Sat, 29 Nov 2025 09:23:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-086358
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                f920f567-c286-45f2-93bb-f2ebbdb3ee93
	  Boot ID:                    6647f078-4edd-40c5-9d0e-49eb5ed00bd7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-2fhrs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-086358                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-2x7dg                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-086358             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-086358    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-2qzkl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-086358             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node embed-certs-086358 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node embed-certs-086358 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x7 over 68s)  kubelet          Node embed-certs-086358 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node embed-certs-086358 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node embed-certs-086358 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node embed-certs-086358 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           55s                node-controller  Node embed-certs-086358 event: Registered Node embed-certs-086358 in Controller
	  Normal   NodeReady                13s                kubelet          Node embed-certs-086358 status is now: NodeReady
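	The node description above (conditions, capacity, non-terminated pods, events) matches what kubectl reports for the node directly; a sketch using the node name from this run:
	
	  kubectl --context embed-certs-086358 describe node embed-certs-086358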
	
	
	==> dmesg <==
	[Nov29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014634] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.570975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032231] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767655] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.282538] kauditd_printk_skb: 36 callbacks suppressed
	[Nov29 08:39] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001077] FS-Cache: O-cookie d=00000000b08097f7{9P.session} n=00000000a17ba85f
	[  +0.001074] FS-Cache: O-key=[10] '34323935323231393134'
	[  +0.000776] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=00000000b08097f7{9P.session} n=00000000534469ad
	[  +0.001092] FS-Cache: N-key=[10] '34323935323231393134'
	[Nov29 09:19] hrtimer: interrupt took 12545193 ns
	
	
	==> etcd [c0577342962bca3db58da726fcac889eec75133a917bc6e9cf1feb6a3f337e59] <==
	{"level":"warn","ts":"2025-11-29T09:22:43.972114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.035078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.054202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.073237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.094118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.118163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.135247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.152114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.170617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.233901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.264311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.340684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.390950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.429184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.490890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.525590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.552210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.590008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.641218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.667668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.710843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.755110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.784862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.831937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:45.024745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:23:49 up  1:06,  0 user,  load average: 3.55, 3.56, 3.00
	Linux embed-certs-086358 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [463144a8348fe09690fae6daaf1a23bd6db8686609b47d2764b6e39f5bbda974] <==
	I1129 09:22:56.303897       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:22:56.380200       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:22:56.380342       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:22:56.380356       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:22:56.380373       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:22:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:22:56.583621       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:22:56.583816       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:22:56.583907       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:22:56.584959       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 09:23:26.584013       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 09:23:26.585043       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1129 09:23:26.585088       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 09:23:26.585154       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1129 09:23:28.184990       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:23:28.185026       1 metrics.go:72] Registering metrics
	I1129 09:23:28.185099       1 controller.go:711] "Syncing nftables rules"
	I1129 09:23:36.588732       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:23:36.588785       1 main.go:301] handling current node
	I1129 09:23:46.584731       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:23:46.584778       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9a782a50e3036c97768d6ec56613adcf9c14b720a7b95396868f2c8ae21e2c1d] <==
	I1129 09:22:46.511053       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 09:22:46.531272       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:22:46.554136       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:22:46.557209       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:22:46.594835       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:22:46.649600       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:22:46.650001       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 09:22:46.650333       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:22:47.224327       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:22:47.230639       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:22:47.230665       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:22:48.078296       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:22:48.133739       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:22:48.278521       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:22:48.298249       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 09:22:48.300224       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:22:48.316424       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:22:48.677857       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:22:49.115019       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:22:49.144246       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:22:49.160587       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:22:54.218565       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:22:54.249510       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:22:54.431100       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:22:54.828750       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [593a51223ee9a2a228c68dbef6b88d64186dd580dacb1aa36709e7d873bea72b] <==
	I1129 09:22:54.065392       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:22:54.065622       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:22:54.065712       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:22:54.066072       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 09:22:54.067383       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:22:54.067908       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:22:54.072460       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:22:54.077386       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:22:54.099839       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:22:54.100947       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 09:22:54.121745       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:22:54.121858       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:22:54.121940       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-086358"
	I1129 09:22:54.121980       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 09:22:54.123013       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:22:54.123042       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:22:54.124672       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 09:22:54.124723       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 09:22:54.124762       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 09:22:54.124767       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 09:22:54.124772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 09:22:54.129216       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:22:54.133714       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:22:54.173334       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-086358" podCIDRs=["10.244.0.0/24"]
	I1129 09:23:39.127957       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0221d25cfd4ddcdcc16f4f520608d24d9dfa2e0df4ef9c1eb5526108818141b0] <==
	I1129 09:22:56.150869       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:22:56.330433       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:22:56.432577       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:22:56.432641       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 09:22:56.432725       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:22:56.573866       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:22:56.574171       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:22:56.591892       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:22:56.592423       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:22:56.592872       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:22:56.594343       1 config.go:200] "Starting service config controller"
	I1129 09:22:56.594526       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:22:56.594646       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:22:56.594714       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:22:56.594813       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:22:56.594873       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:22:56.595595       1 config.go:309] "Starting node config controller"
	I1129 09:22:56.595701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:22:56.595786       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:22:56.695895       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:22:56.696040       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:22:56.696414       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [63d03d07ac0a1758cd00c71c131868b3e936406ac3079afa609a554f2c6c1c6a] <==
	I1129 09:22:47.023097       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:22:47.025333       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:22:47.029442       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:22:47.029491       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1129 09:22:47.031332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:22:47.040271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:22:47.040317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:22:47.040351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:22:47.040391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:22:47.040423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:22:47.040455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:22:47.040487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:22:47.040520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:22:47.040551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:22:47.040581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:22:47.048426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:22:47.049000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:22:47.049189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:22:47.049625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:22:47.049892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:22:47.050401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:22:47.050588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:22:47.051799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1129 09:22:47.859215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1129 09:22:49.630532       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:22:50 embed-certs-086358 kubelet[1465]: I1129 09:22:50.451377    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-086358" podStartSLOduration=1.4513444500000001 podStartE2EDuration="1.45134445s" podCreationTimestamp="2025-11-29 09:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:50.44854847 +0000 UTC m=+1.382193162" watchObservedRunningTime="2025-11-29 09:22:50.45134445 +0000 UTC m=+1.384989282"
	Nov 29 09:22:50 embed-certs-086358 kubelet[1465]: I1129 09:22:50.451613    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-086358" podStartSLOduration=1.451594347 podStartE2EDuration="1.451594347s" podCreationTimestamp="2025-11-29 09:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:50.432170164 +0000 UTC m=+1.365814848" watchObservedRunningTime="2025-11-29 09:22:50.451594347 +0000 UTC m=+1.385239023"
	Nov 29 09:22:50 embed-certs-086358 kubelet[1465]: I1129 09:22:50.484347    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-086358" podStartSLOduration=1.484327763 podStartE2EDuration="1.484327763s" podCreationTimestamp="2025-11-29 09:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:50.466349628 +0000 UTC m=+1.399994312" watchObservedRunningTime="2025-11-29 09:22:50.484327763 +0000 UTC m=+1.417972447"
	Nov 29 09:22:50 embed-certs-086358 kubelet[1465]: I1129 09:22:50.517834    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-086358" podStartSLOduration=1.517811058 podStartE2EDuration="1.517811058s" podCreationTimestamp="2025-11-29 09:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:50.484908985 +0000 UTC m=+1.418553677" watchObservedRunningTime="2025-11-29 09:22:50.517811058 +0000 UTC m=+1.451455742"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.216867    1465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.232611    1465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.982583    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sznz\" (UniqueName: \"kubernetes.io/projected/4945072e-8049-437d-8593-8f1de5316222-kube-api-access-9sznz\") pod \"kindnet-2x7dg\" (UID: \"4945072e-8049-437d-8593-8f1de5316222\") " pod="kube-system/kindnet-2x7dg"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.982860    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgzp\" (UniqueName: \"kubernetes.io/projected/2def38f6-3e34-4e81-a66a-59f10b8fc3a0-kube-api-access-9jgzp\") pod \"kube-proxy-2qzkl\" (UID: \"2def38f6-3e34-4e81-a66a-59f10b8fc3a0\") " pod="kube-system/kube-proxy-2qzkl"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.982900    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2def38f6-3e34-4e81-a66a-59f10b8fc3a0-kube-proxy\") pod \"kube-proxy-2qzkl\" (UID: \"2def38f6-3e34-4e81-a66a-59f10b8fc3a0\") " pod="kube-system/kube-proxy-2qzkl"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.983031    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2def38f6-3e34-4e81-a66a-59f10b8fc3a0-lib-modules\") pod \"kube-proxy-2qzkl\" (UID: \"2def38f6-3e34-4e81-a66a-59f10b8fc3a0\") " pod="kube-system/kube-proxy-2qzkl"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.983057    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4945072e-8049-437d-8593-8f1de5316222-xtables-lock\") pod \"kindnet-2x7dg\" (UID: \"4945072e-8049-437d-8593-8f1de5316222\") " pod="kube-system/kindnet-2x7dg"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.983267    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2def38f6-3e34-4e81-a66a-59f10b8fc3a0-xtables-lock\") pod \"kube-proxy-2qzkl\" (UID: \"2def38f6-3e34-4e81-a66a-59f10b8fc3a0\") " pod="kube-system/kube-proxy-2qzkl"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.983296    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4945072e-8049-437d-8593-8f1de5316222-cni-cfg\") pod \"kindnet-2x7dg\" (UID: \"4945072e-8049-437d-8593-8f1de5316222\") " pod="kube-system/kindnet-2x7dg"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.983529    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4945072e-8049-437d-8593-8f1de5316222-lib-modules\") pod \"kindnet-2x7dg\" (UID: \"4945072e-8049-437d-8593-8f1de5316222\") " pod="kube-system/kindnet-2x7dg"
	Nov 29 09:22:55 embed-certs-086358 kubelet[1465]: I1129 09:22:55.127627    1465 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 09:22:56 embed-certs-086358 kubelet[1465]: I1129 09:22:56.534637    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2x7dg" podStartSLOduration=2.534587893 podStartE2EDuration="2.534587893s" podCreationTimestamp="2025-11-29 09:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:56.503433088 +0000 UTC m=+7.437077797" watchObservedRunningTime="2025-11-29 09:22:56.534587893 +0000 UTC m=+7.468232577"
	Nov 29 09:22:57 embed-certs-086358 kubelet[1465]: I1129 09:22:57.348205    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2qzkl" podStartSLOduration=3.348185323 podStartE2EDuration="3.348185323s" podCreationTimestamp="2025-11-29 09:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:56.545519823 +0000 UTC m=+7.479164516" watchObservedRunningTime="2025-11-29 09:22:57.348185323 +0000 UTC m=+8.281829998"
	Nov 29 09:23:36 embed-certs-086358 kubelet[1465]: I1129 09:23:36.643565    1465 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:23:36 embed-certs-086358 kubelet[1465]: I1129 09:23:36.860330    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e08be393-d772-4606-bb5b-b754bee79505-tmp\") pod \"storage-provisioner\" (UID: \"e08be393-d772-4606-bb5b-b754bee79505\") " pod="kube-system/storage-provisioner"
	Nov 29 09:23:36 embed-certs-086358 kubelet[1465]: I1129 09:23:36.860375    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smzfk\" (UniqueName: \"kubernetes.io/projected/e08be393-d772-4606-bb5b-b754bee79505-kube-api-access-smzfk\") pod \"storage-provisioner\" (UID: \"e08be393-d772-4606-bb5b-b754bee79505\") " pod="kube-system/storage-provisioner"
	Nov 29 09:23:36 embed-certs-086358 kubelet[1465]: I1129 09:23:36.860400    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8lz\" (UniqueName: \"kubernetes.io/projected/224b9d8a-65f2-44ed-b5b3-9b8f39ac6854-kube-api-access-8w8lz\") pod \"coredns-66bc5c9577-2fhrs\" (UID: \"224b9d8a-65f2-44ed-b5b3-9b8f39ac6854\") " pod="kube-system/coredns-66bc5c9577-2fhrs"
	Nov 29 09:23:36 embed-certs-086358 kubelet[1465]: I1129 09:23:36.860423    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224b9d8a-65f2-44ed-b5b3-9b8f39ac6854-config-volume\") pod \"coredns-66bc5c9577-2fhrs\" (UID: \"224b9d8a-65f2-44ed-b5b3-9b8f39ac6854\") " pod="kube-system/coredns-66bc5c9577-2fhrs"
	Nov 29 09:23:37 embed-certs-086358 kubelet[1465]: I1129 09:23:37.611622    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2fhrs" podStartSLOduration=43.611586205 podStartE2EDuration="43.611586205s" podCreationTimestamp="2025-11-29 09:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:23:37.582263068 +0000 UTC m=+48.515907743" watchObservedRunningTime="2025-11-29 09:23:37.611586205 +0000 UTC m=+48.545230889"
	Nov 29 09:23:39 embed-certs-086358 kubelet[1465]: I1129 09:23:39.813344    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.813321959 podStartE2EDuration="43.813321959s" podCreationTimestamp="2025-11-29 09:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:23:37.650344944 +0000 UTC m=+48.583989628" watchObservedRunningTime="2025-11-29 09:23:39.813321959 +0000 UTC m=+50.746966635"
	Nov 29 09:23:39 embed-certs-086358 kubelet[1465]: I1129 09:23:39.986379    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jtht\" (UniqueName: \"kubernetes.io/projected/17a6629d-52f0-4e8d-8452-1bf975092ed9-kube-api-access-6jtht\") pod \"busybox\" (UID: \"17a6629d-52f0-4e8d-8452-1bf975092ed9\") " pod="default/busybox"
	
	
	==> storage-provisioner [71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f] <==
	I1129 09:23:37.294734       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:23:37.311117       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:23:37.311214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:23:37.316998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:37.339912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:23:37.340106       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:23:37.340398       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-086358_2e07c11f-7260-41e3-9e3b-daaadcf9b0d5!
	I1129 09:23:37.341957       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e0af4b5d-59f0-45a0-9470-87209f513e0b", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-086358_2e07c11f-7260-41e3-9e3b-daaadcf9b0d5 became leader
	W1129 09:23:37.360035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:37.364013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:23:37.441152       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-086358_2e07c11f-7260-41e3-9e3b-daaadcf9b0d5!
	W1129 09:23:39.367932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:39.373243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:41.376819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:41.381793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:43.385154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:43.389964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:45.393337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:45.401306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:47.404357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:47.410223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:49.414654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:49.423124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-086358 -n embed-certs-086358
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-086358 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-086358
helpers_test.go:243: (dbg) docker inspect embed-certs-086358:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62",
	        "Created": "2025-11-29T09:22:24.463403992Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 236796,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:22:24.527022025Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62/hostname",
	        "HostsPath": "/var/lib/docker/containers/a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62/hosts",
	        "LogPath": "/var/lib/docker/containers/a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62/a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62-json.log",
	        "Name": "/embed-certs-086358",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-086358:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-086358",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a18e36fe3f748fd904f7bea90b51373b9c3b8803336a470460bdfb916aa60d62",
	                "LowerDir": "/var/lib/docker/overlay2/b2c53c864672ca3c55693f7b314c2b772fb66457a4897c27484040d38f636834-init/diff:/var/lib/docker/overlay2/fc2ab0019906b90b3f033fa414f560878b73f7ff0ebdf77a0b554a40813009d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2c53c864672ca3c55693f7b314c2b772fb66457a4897c27484040d38f636834/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2c53c864672ca3c55693f7b314c2b772fb66457a4897c27484040d38f636834/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2c53c864672ca3c55693f7b314c2b772fb66457a4897c27484040d38f636834/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-086358",
	                "Source": "/var/lib/docker/volumes/embed-certs-086358/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-086358",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-086358",
	                "name.minikube.sigs.k8s.io": "embed-certs-086358",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d110d60ffd1659c6704af757e1f1f8b8b4b72fa53317af18897e73fda1e2da76",
	            "SandboxKey": "/var/run/docker/netns/d110d60ffd16",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-086358": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:13:f6:a7:47:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "94288028999fea4312df50c7c050414cb2c3cf91bd0cc6d77dc6860b9f740f8b",
	                    "EndpointID": "7d6cd273a55a9b4323f33f00f04211b82bb0a0d959d56374b2c62d3e8f8bdf34",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-086358",
	                        "a18e36fe3f74"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-086358 -n embed-certs-086358
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-086358 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-086358 logs -n 25: (1.230137348s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p cert-expiration-592440 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-592440       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ delete  │ -p cert-expiration-592440                                                                                                                                                                                                                           │ cert-expiration-592440       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-071895 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ stop    │ -p old-k8s-version-071895 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-071895 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p no-preload-230403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ stop    │ -p no-preload-230403 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable dashboard -p no-preload-230403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ start   │ -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:22 UTC │
	│ image   │ old-k8s-version-071895 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ pause   │ -p old-k8s-version-071895 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ unpause │ -p old-k8s-version-071895 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p old-k8s-version-071895                                                                                                                                                                                                                           │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p old-k8s-version-071895                                                                                                                                                                                                                           │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ start   │ -p embed-certs-086358 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:23 UTC │
	│ image   │ no-preload-230403 image list --format=json                                                                                                                                                                                                          │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ pause   │ -p no-preload-230403 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ unpause │ -p no-preload-230403 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p no-preload-230403                                                                                                                                                                                                                                │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p no-preload-230403                                                                                                                                                                                                                                │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p disable-driver-mounts-267340                                                                                                                                                                                                                     │ disable-driver-mounts-267340 │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ start   │ -p default-k8s-diff-port-528769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-528769 │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:22:56
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:22:56.886588  240275 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:22:56.887162  240275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:22:56.887169  240275 out.go:374] Setting ErrFile to fd 2...
	I1129 09:22:56.887174  240275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:22:56.887446  240275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:22:56.887876  240275 out.go:368] Setting JSON to false
	I1129 09:22:56.888888  240275 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3928,"bootTime":1764404249,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 09:22:56.888960  240275 start.go:143] virtualization:  
	I1129 09:22:56.893709  240275 out.go:179] * [default-k8s-diff-port-528769] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:22:56.896949  240275 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:22:56.897007  240275 notify.go:221] Checking for updates...
	I1129 09:22:56.903079  240275 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:22:56.905889  240275 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:22:56.908856  240275 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 09:22:56.911783  240275 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:22:56.914660  240275 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:22:56.918212  240275 config.go:182] Loaded profile config "embed-certs-086358": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:22:56.918329  240275 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:22:56.953130  240275 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:22:56.953251  240275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:22:57.018929  240275 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:22:57.006330515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:22:57.019039  240275 docker.go:319] overlay module found
	I1129 09:22:57.022830  240275 out.go:179] * Using the docker driver based on user configuration
	I1129 09:22:57.025807  240275 start.go:309] selected driver: docker
	I1129 09:22:57.025832  240275 start.go:927] validating driver "docker" against <nil>
	I1129 09:22:57.025847  240275 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:22:57.026584  240275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:22:57.087124  240275 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:22:57.077423086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:22:57.087285  240275 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:22:57.087525  240275 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:22:57.096107  240275 out.go:179] * Using Docker driver with root privileges
	I1129 09:22:57.099816  240275 cni.go:84] Creating CNI manager for ""
	I1129 09:22:57.099901  240275 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:22:57.099913  240275 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:22:57.100009  240275 start.go:353] cluster config:
	{Name:default-k8s-diff-port-528769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-528769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:22:57.103109  240275 out.go:179] * Starting "default-k8s-diff-port-528769" primary control-plane node in "default-k8s-diff-port-528769" cluster
	I1129 09:22:57.106008  240275 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:22:57.108840  240275 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:22:57.111783  240275 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:22:57.111830  240275 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1129 09:22:57.111840  240275 cache.go:65] Caching tarball of preloaded images
	I1129 09:22:57.111877  240275 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:22:57.111925  240275 preload.go:238] Found /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1129 09:22:57.111936  240275 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1129 09:22:57.112044  240275 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/config.json ...
	I1129 09:22:57.112061  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/config.json: {Name:mk53a836b7bb385e995fdae1587bf5271cb50e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:22:57.132953  240275 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:22:57.132978  240275 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:22:57.132999  240275 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:22:57.133032  240275 start.go:360] acquireMachinesLock for default-k8s-diff-port-528769: {Name:mk914e0f0d088ade1b42caaad044a8f91bf65d7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:22:57.133152  240275 start.go:364] duration metric: took 98.873µs to acquireMachinesLock for "default-k8s-diff-port-528769"
	I1129 09:22:57.133182  240275 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-528769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-528769 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:22:57.133250  240275 start.go:125] createHost starting for "" (driver="docker")
	I1129 09:22:56.809506  236407 addons.go:530] duration metric: took 2.476227417s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1129 09:22:58.256069  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:22:57.136560  240275 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:22:57.136841  240275 start.go:159] libmachine.API.Create for "default-k8s-diff-port-528769" (driver="docker")
	I1129 09:22:57.136880  240275 client.go:173] LocalClient.Create starting
	I1129 09:22:57.136953  240275 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem
	I1129 09:22:57.136994  240275 main.go:143] libmachine: Decoding PEM data...
	I1129 09:22:57.137021  240275 main.go:143] libmachine: Parsing certificate...
	I1129 09:22:57.137084  240275 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem
	I1129 09:22:57.137106  240275 main.go:143] libmachine: Decoding PEM data...
	I1129 09:22:57.137122  240275 main.go:143] libmachine: Parsing certificate...
	I1129 09:22:57.137512  240275 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-528769 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:22:57.153582  240275 cli_runner.go:211] docker network inspect default-k8s-diff-port-528769 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:22:57.153672  240275 network_create.go:284] running [docker network inspect default-k8s-diff-port-528769] to gather additional debugging logs...
	I1129 09:22:57.153695  240275 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-528769
	W1129 09:22:57.170156  240275 cli_runner.go:211] docker network inspect default-k8s-diff-port-528769 returned with exit code 1
	I1129 09:22:57.170195  240275 network_create.go:287] error running [docker network inspect default-k8s-diff-port-528769]: docker network inspect default-k8s-diff-port-528769: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-528769 not found
	I1129 09:22:57.170209  240275 network_create.go:289] output of [docker network inspect default-k8s-diff-port-528769]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-528769 not found
	
	** /stderr **
	I1129 09:22:57.170338  240275 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:22:57.187501  240275 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8664e809540f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:5a:a5:48:89:fb} reservation:<nil>}
	I1129 09:22:57.187846  240275 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe5a1fed3d29 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8e:0c:ca:69:14:77} reservation:<nil>}
	I1129 09:22:57.188187  240275 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c3b36bc67c6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:2d:06:dd:2d:03} reservation:<nil>}
	I1129 09:22:57.188477  240275 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-94288028999f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:eb:14:45:1a:19} reservation:<nil>}
	I1129 09:22:57.188941  240275 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e92b0}
	I1129 09:22:57.188970  240275 network_create.go:124] attempt to create docker network default-k8s-diff-port-528769 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1129 09:22:57.189028  240275 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-528769 default-k8s-diff-port-528769
	I1129 09:22:57.259393  240275 network_create.go:108] docker network default-k8s-diff-port-528769 192.168.85.0/24 created
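	The scan above walks the bridges already claimed by earlier profiles (192.168.49/58/67/76.0/24) and settles on the first free /24. A rough manual equivalent with plain docker commands (a sketch, not taken from this run):
	  # list the subnets already held by bridge networks on this host
	  docker network ls --filter driver=bridge --format '{{.Name}}' \
	    | xargs -r -n1 docker network inspect --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	  # the log then creates the chosen network, roughly: docker network create --driver=bridge \
	  #   --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o com.docker.network.driver.mtu=1500 <name>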
	I1129 09:22:57.259429  240275 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-528769" container
	I1129 09:22:57.259516  240275 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:22:57.281792  240275 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-528769 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-528769 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:22:57.300032  240275 oci.go:103] Successfully created a docker volume default-k8s-diff-port-528769
	I1129 09:22:57.300130  240275 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-528769-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-528769 --entrypoint /usr/bin/test -v default-k8s-diff-port-528769:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:22:57.848603  240275 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-528769
	I1129 09:22:57.848705  240275 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:22:57.848715  240275 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:22:57.848784  240275 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-528769:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1129 09:23:00.270798  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:02.757801  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:02.541498  240275 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-528769:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.692676691s)
	I1129 09:23:02.541533  240275 kic.go:203] duration metric: took 4.692815597s to extract preloaded images to volume ...
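	The 4.7s extraction above uses a throw-away container that mounts the named volume and the host-side tarball, then untars the preload straight into the volume. A minimal sketch of the same pattern; demo-data, demo.tar.lz4 and KICBASE are placeholder names (KICBASE standing in for the gcr.io/k8s-minikube/kicbase-builds image pulled above, which ships tar and lz4):
	  docker volume create demo-data
	  docker run --rm \
	    -v "$PWD/demo.tar.lz4:/preloaded.tar:ro" \
	    -v demo-data:/extractDir \
	    --entrypoint /usr/bin/tar \
	    "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir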
	W1129 09:23:02.541685  240275 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1129 09:23:02.541803  240275 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:23:02.597303  240275 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-528769 --name default-k8s-diff-port-528769 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-528769 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-528769 --network default-k8s-diff-port-528769 --ip 192.168.85.2 --volume default-k8s-diff-port-528769:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:23:02.940132  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Running}}
	I1129 09:23:02.963590  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:02.987992  240275 cli_runner.go:164] Run: docker exec default-k8s-diff-port-528769 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:23:03.055879  240275 oci.go:144] the created container "default-k8s-diff-port-528769" has a running status.
	I1129 09:23:03.055908  240275 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa...
	I1129 09:23:03.329893  240275 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:23:03.354333  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:03.380414  240275 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:23:03.380440  240275 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-528769 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:23:03.443668  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:03.476147  240275 machine.go:94] provisionDockerMachine start ...
	I1129 09:23:03.476243  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:03.503234  240275 main.go:143] libmachine: Using SSH client type: native
	I1129 09:23:03.503577  240275 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1129 09:23:03.503585  240275 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:23:03.504318  240275 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 09:23:06.660568  240275 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-528769
	
	I1129 09:23:06.660593  240275 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-528769"
	I1129 09:23:06.660683  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:06.678983  240275 main.go:143] libmachine: Using SSH client type: native
	I1129 09:23:06.679306  240275 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1129 09:23:06.679326  240275 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-528769 && echo "default-k8s-diff-port-528769" | sudo tee /etc/hostname
	I1129 09:23:06.842728  240275 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-528769
	
	I1129 09:23:06.842945  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:06.861099  240275 main.go:143] libmachine: Using SSH client type: native
	I1129 09:23:06.861436  240275 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1129 09:23:06.861464  240275 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-528769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-528769/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-528769' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1129 09:23:05.256346  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:07.257251  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:07.014053  240275 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:23:07.014098  240275 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-2317/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-2317/.minikube}
	I1129 09:23:07.014142  240275 ubuntu.go:190] setting up certificates
	I1129 09:23:07.014162  240275 provision.go:84] configureAuth start
	I1129 09:23:07.014265  240275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-528769
	I1129 09:23:07.035244  240275 provision.go:143] copyHostCerts
	I1129 09:23:07.035313  240275 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem, removing ...
	I1129 09:23:07.035322  240275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem
	I1129 09:23:07.035403  240275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem (1123 bytes)
	I1129 09:23:07.035500  240275 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem, removing ...
	I1129 09:23:07.035513  240275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem
	I1129 09:23:07.035540  240275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem (1679 bytes)
	I1129 09:23:07.035591  240275 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem, removing ...
	I1129 09:23:07.035595  240275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem
	I1129 09:23:07.035622  240275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem (1082 bytes)
	I1129 09:23:07.035701  240275 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-528769 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-528769 localhost minikube]
	I1129 09:23:07.221271  240275 provision.go:177] copyRemoteCerts
	I1129 09:23:07.221339  240275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:23:07.221392  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:07.239928  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:07.349722  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:23:07.368126  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:23:07.387319  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1129 09:23:07.407076  240275 provision.go:87] duration metric: took 392.871055ms to configureAuth
	I1129 09:23:07.407116  240275 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:23:07.407476  240275 config.go:182] Loaded profile config "default-k8s-diff-port-528769": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:23:07.407499  240275 machine.go:97] duration metric: took 3.931328976s to provisionDockerMachine
	I1129 09:23:07.407507  240275 client.go:176] duration metric: took 10.270617706s to LocalClient.Create
	I1129 09:23:07.407533  240275 start.go:167] duration metric: took 10.270692381s to libmachine.API.Create "default-k8s-diff-port-528769"
	I1129 09:23:07.407541  240275 start.go:293] postStartSetup for "default-k8s-diff-port-528769" (driver="docker")
	I1129 09:23:07.407563  240275 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:23:07.407643  240275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:23:07.407690  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:07.427475  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:07.542270  240275 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:23:07.545896  240275 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:23:07.545927  240275 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:23:07.545941  240275 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/addons for local assets ...
	I1129 09:23:07.546005  240275 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/files for local assets ...
	I1129 09:23:07.546089  240275 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem -> 41372.pem in /etc/ssl/certs
	I1129 09:23:07.546198  240275 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:23:07.555192  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:23:07.574057  240275 start.go:296] duration metric: took 166.500841ms for postStartSetup
	I1129 09:23:07.574488  240275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-528769
	I1129 09:23:07.591751  240275 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/config.json ...
	I1129 09:23:07.592046  240275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:23:07.592094  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:07.609542  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:07.714196  240275 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:23:07.719192  240275 start.go:128] duration metric: took 10.585927311s to createHost
	I1129 09:23:07.719219  240275 start.go:83] releasing machines lock for "default-k8s-diff-port-528769", held for 10.586053572s
	I1129 09:23:07.719301  240275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-528769
	I1129 09:23:07.742045  240275 ssh_runner.go:195] Run: cat /version.json
	I1129 09:23:07.742093  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:07.742324  240275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:23:07.742370  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:07.777333  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:07.790431  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:07.888550  240275 ssh_runner.go:195] Run: systemctl --version
	I1129 09:23:07.976986  240275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:23:07.988874  240275 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:23:07.988966  240275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:23:08.028891  240275 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1129 09:23:08.028920  240275 start.go:496] detecting cgroup driver to use...
	I1129 09:23:08.028967  240275 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 09:23:08.029037  240275 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:23:08.047017  240275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:23:08.062615  240275 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:23:08.062682  240275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:23:08.081653  240275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:23:08.101407  240275 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:23:08.228535  240275 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:23:08.369349  240275 docker.go:234] disabling docker service ...
	I1129 09:23:08.369467  240275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:23:08.392300  240275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:23:08.407379  240275 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:23:08.542008  240275 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:23:08.675182  240275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:23:08.691017  240275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:23:08.706677  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:23:08.716649  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:23:08.725875  240275 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1129 09:23:08.725952  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1129 09:23:08.736397  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:23:08.746920  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:23:08.760702  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:23:08.770566  240275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:23:08.780565  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:23:08.790252  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:23:08.799658  240275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:23:08.809215  240275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:23:08.818703  240275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:23:08.827039  240275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:23:08.951618  240275 ssh_runner.go:195] Run: sudo systemctl restart containerd
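	Those sed edits and the restart can be spot-checked from the host; a sketch, assuming the node container name used above and that the edits landed in /etc/containerd/config.toml:
	  # expect SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.10.1",
	  # enable_unprivileged_ports = true and conf_dir = "/etc/cni/net.d", per the edits above
	  docker exec default-k8s-diff-port-528769 grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	  docker exec default-k8s-diff-port-528769 systemctl is-active containerd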
	I1129 09:23:09.108759  240275 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:23:09.108881  240275 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:23:09.112994  240275 start.go:564] Will wait 60s for crictl version
	I1129 09:23:09.113108  240275 ssh_runner.go:195] Run: which crictl
	I1129 09:23:09.116782  240275 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:23:09.143455  240275 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:23:09.143588  240275 ssh_runner.go:195] Run: containerd --version
	I1129 09:23:09.164571  240275 ssh_runner.go:195] Run: containerd --version
	I1129 09:23:09.194615  240275 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:23:09.197541  240275 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-528769 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:23:09.213699  240275 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 09:23:09.217761  240275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:23:09.232515  240275 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-528769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-528769 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:23:09.232673  240275 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:23:09.232748  240275 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:23:09.261041  240275 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:23:09.261065  240275 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:23:09.261124  240275 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:23:09.286376  240275 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:23:09.286411  240275 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:23:09.286420  240275 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1129 09:23:09.286554  240275 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-528769 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-528769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:23:09.286631  240275 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:23:09.311567  240275 cni.go:84] Creating CNI manager for ""
	I1129 09:23:09.311593  240275 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:23:09.311606  240275 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:23:09.311631  240275 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-528769 NodeName:default-k8s-diff-port-528769 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:23:09.311750  240275 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-528769"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:23:09.311817  240275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:23:09.319701  240275 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:23:09.319770  240275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:23:09.328133  240275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1129 09:23:09.352587  240275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:23:09.365839  240275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
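	With the kubelet unit files and kubeadm.yaml.new now on the node, both can be sanity-checked before kubeadm runs; a sketch using the paths from the log (kubeadm config validate is available in recent kubeadm releases, including v1.34):
	  docker exec default-k8s-diff-port-528769 systemctl cat kubelet
	  docker exec default-k8s-diff-port-528769 /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new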
	I1129 09:23:09.379625  240275 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:23:09.383306  240275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:23:09.392827  240275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:23:09.521000  240275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:23:09.539047  240275 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769 for IP: 192.168.85.2
	I1129 09:23:09.539067  240275 certs.go:195] generating shared ca certs ...
	I1129 09:23:09.539083  240275 certs.go:227] acquiring lock for ca certs: {Name:mke655c14945a8520f2f9de36531df923afb2bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:09.539223  240275 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key
	I1129 09:23:09.539275  240275 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key
	I1129 09:23:09.539285  240275 certs.go:257] generating profile certs ...
	I1129 09:23:09.539339  240275 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.key
	I1129 09:23:09.539356  240275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.crt with IP's: []
	I1129 09:23:09.806954  240275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.crt ...
	I1129 09:23:09.806989  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.crt: {Name:mkccb154cd5bbc2795906704a4034218f3573327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:09.807861  240275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.key ...
	I1129 09:23:09.807880  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.key: {Name:mkeca98bac5795c13ad059b9a36eb31374878d65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:09.807979  240275 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key.10155db3
	I1129 09:23:09.808005  240275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt.10155db3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1129 09:23:09.942874  240275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt.10155db3 ...
	I1129 09:23:09.942910  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt.10155db3: {Name:mka6229a7322599f23ec94877297394ab51f4eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:09.943135  240275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key.10155db3 ...
	I1129 09:23:09.943154  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key.10155db3: {Name:mk170c87ed21393949dbb46aaabd8dae18f2b31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:09.943250  240275 certs.go:382] copying /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt.10155db3 -> /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt
	I1129 09:23:09.943331  240275 certs.go:386] copying /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key.10155db3 -> /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key
	I1129 09:23:09.943407  240275 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.key
	I1129 09:23:09.943426  240275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.crt with IP's: []
	I1129 09:23:10.054844  240275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.crt ...
	I1129 09:23:10.054878  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.crt: {Name:mk0ce369c67aa0068714a185899730953f48c746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:10.055072  240275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.key ...
	I1129 09:23:10.055087  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.key: {Name:mkbd581549b9876eab92384291dd3793dd4a3e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:10.055279  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem (1338 bytes)
	W1129 09:23:10.055327  240275 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137_empty.pem, impossibly tiny 0 bytes
	I1129 09:23:10.055341  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:23:10.055369  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:23:10.055401  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:23:10.055429  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem (1679 bytes)
	I1129 09:23:10.055482  240275 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:23:10.056046  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:23:10.077464  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1129 09:23:10.109432  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:23:10.133111  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:23:10.152942  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1129 09:23:10.172098  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:23:10.192097  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:23:10.212100  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:23:10.231859  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:23:10.252208  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem --> /usr/share/ca-certificates/4137.pem (1338 bytes)
	I1129 09:23:10.279963  240275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /usr/share/ca-certificates/41372.pem (1708 bytes)
	I1129 09:23:10.299613  240275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:23:10.314348  240275 ssh_runner.go:195] Run: openssl version
	I1129 09:23:10.321487  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:23:10.330486  240275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:23:10.334626  240275 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:23:10.334735  240275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:23:10.377013  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:23:10.386451  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4137.pem && ln -fs /usr/share/ca-certificates/4137.pem /etc/ssl/certs/4137.pem"
	I1129 09:23:10.395262  240275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4137.pem
	I1129 09:23:10.399226  240275 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/4137.pem
	I1129 09:23:10.399331  240275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4137.pem
	I1129 09:23:10.442777  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4137.pem /etc/ssl/certs/51391683.0"
	I1129 09:23:10.451672  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41372.pem && ln -fs /usr/share/ca-certificates/41372.pem /etc/ssl/certs/41372.pem"
	I1129 09:23:10.460746  240275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41372.pem
	I1129 09:23:10.464938  240275 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/41372.pem
	I1129 09:23:10.465007  240275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41372.pem
	I1129 09:23:10.518834  240275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41372.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:23:10.529634  240275 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:23:10.534206  240275 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:23:10.534277  240275 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-528769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-528769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:23:10.534356  240275 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:23:10.534431  240275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:23:10.562679  240275 cri.go:89] found id: ""
	I1129 09:23:10.562794  240275 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:23:10.570957  240275 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:23:10.579051  240275 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:23:10.579130  240275 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:23:10.587258  240275 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:23:10.587292  240275 kubeadm.go:158] found existing configuration files:
	
	I1129 09:23:10.587375  240275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1129 09:23:10.595663  240275 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:23:10.595752  240275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:23:10.603803  240275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1129 09:23:10.613513  240275 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:23:10.613619  240275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:23:10.621331  240275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1129 09:23:10.629472  240275 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:23:10.629586  240275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:23:10.637322  240275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1129 09:23:10.645506  240275 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:23:10.645597  240275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:23:10.653415  240275 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:23:10.700349  240275 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:23:10.700446  240275 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:23:10.727649  240275 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:23:10.727783  240275 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1129 09:23:10.727849  240275 kubeadm.go:319] OS: Linux
	I1129 09:23:10.727914  240275 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:23:10.727990  240275 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1129 09:23:10.728066  240275 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:23:10.728141  240275 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:23:10.728217  240275 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:23:10.728287  240275 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:23:10.728359  240275 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:23:10.728438  240275 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:23:10.728504  240275 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1129 09:23:10.811028  240275 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:23:10.811195  240275 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:23:10.811323  240275 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:23:10.818596  240275 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:23:10.824908  240275 out.go:252]   - Generating certificates and keys ...
	I1129 09:23:10.825103  240275 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:23:10.825186  240275 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:23:11.106139  240275 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1129 09:23:09.757543  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:12.256885  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:12.962060  240275 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:23:13.287365  240275 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:23:13.797605  240275 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:23:13.885395  240275 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:23:13.885772  240275 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-528769 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:23:14.539005  240275 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:23:14.539308  240275 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-528769 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:23:15.510996  240275 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:23:15.617440  240275 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:23:16.157752  240275 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:23:16.158063  240275 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1129 09:23:14.257088  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:16.257836  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:16.925956  240275 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:23:17.371600  240275 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:23:18.014048  240275 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:23:18.493342  240275 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:23:18.712151  240275 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:23:18.713131  240275 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:23:18.717111  240275 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:23:18.720404  240275 out.go:252]   - Booting up control plane ...
	I1129 09:23:18.720514  240275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:23:18.720595  240275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:23:18.721362  240275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:23:18.739183  240275 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:23:18.739299  240275 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:23:18.746752  240275 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:23:18.747091  240275 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:23:18.749942  240275 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:23:18.891551  240275 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:23:18.891671  240275 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:23:19.893060  240275 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001781342s
	I1129 09:23:19.896503  240275 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:23:19.896597  240275 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1129 09:23:19.896955  240275 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:23:19.897046  240275 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1129 09:23:18.756574  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:20.758518  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:23.256294  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:22.241412  240275 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.344329117s
	I1129 09:23:24.857356  240275 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.960792639s
	I1129 09:23:26.900001  240275 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003223708s
	I1129 09:23:26.921571  240275 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:23:26.945311  240275 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:23:26.965618  240275 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:23:26.965844  240275 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-528769 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:23:26.980087  240275 kubeadm.go:319] [bootstrap-token] Using token: zdgyh9.0wpg2ibusyd3huv3
	I1129 09:23:26.983038  240275 out.go:252]   - Configuring RBAC rules ...
	I1129 09:23:26.983168  240275 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:23:26.988070  240275 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:23:27.009422  240275 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:23:27.024949  240275 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:23:27.032483  240275 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:23:27.038193  240275 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:23:27.309607  240275 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:23:27.735557  240275 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:23:28.309702  240275 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:23:28.310953  240275 kubeadm.go:319] 
	I1129 09:23:28.311024  240275 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:23:28.311030  240275 kubeadm.go:319] 
	I1129 09:23:28.311102  240275 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:23:28.311106  240275 kubeadm.go:319] 
	I1129 09:23:28.311129  240275 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:23:28.311185  240275 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:23:28.311232  240275 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:23:28.311235  240275 kubeadm.go:319] 
	I1129 09:23:28.311286  240275 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:23:28.311289  240275 kubeadm.go:319] 
	I1129 09:23:28.311334  240275 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:23:28.311337  240275 kubeadm.go:319] 
	I1129 09:23:28.311386  240275 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:23:28.311456  240275 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:23:28.311523  240275 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:23:28.311527  240275 kubeadm.go:319] 
	I1129 09:23:28.311607  240275 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:23:28.311689  240275 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:23:28.311694  240275 kubeadm.go:319] 
	I1129 09:23:28.311773  240275 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token zdgyh9.0wpg2ibusyd3huv3 \
	I1129 09:23:28.311870  240275 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:de8e56270375befae923bc70a44a39424a62093a1080181ff9ea4b4afb1027a6 \
	I1129 09:23:28.311889  240275 kubeadm.go:319] 	--control-plane 
	I1129 09:23:28.311897  240275 kubeadm.go:319] 
	I1129 09:23:28.311977  240275 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:23:28.311981  240275 kubeadm.go:319] 
	I1129 09:23:28.312064  240275 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token zdgyh9.0wpg2ibusyd3huv3 \
	I1129 09:23:28.312161  240275 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:de8e56270375befae923bc70a44a39424a62093a1080181ff9ea4b4afb1027a6 
	I1129 09:23:28.316589  240275 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1129 09:23:28.316857  240275 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 09:23:28.316970  240275 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 09:23:28.316995  240275 cni.go:84] Creating CNI manager for ""
	I1129 09:23:28.317008  240275 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:23:28.320302  240275 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1129 09:23:25.257411  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:27.756892  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:28.323156  240275 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:23:28.327426  240275 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:23:28.327455  240275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:23:28.341412  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:23:28.662860  240275 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:23:28.662997  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:28.663084  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-528769 minikube.k8s.io/updated_at=2025_11_29T09_23_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=default-k8s-diff-port-528769 minikube.k8s.io/primary=true
	I1129 09:23:28.861054  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:28.861131  240275 ops.go:34] apiserver oom_adj: -16
	I1129 09:23:29.361242  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:29.861859  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:30.361053  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:30.861089  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:31.361566  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:31.861793  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:32.361140  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:32.861299  240275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:23:33.115156  240275 kubeadm.go:1114] duration metric: took 4.45220329s to wait for elevateKubeSystemPrivileges
	I1129 09:23:33.115188  240275 kubeadm.go:403] duration metric: took 22.580917755s to StartCluster
	I1129 09:23:33.115206  240275 settings.go:142] acquiring lock: {Name:mk44917d1324740eeda65bf3aa312ad1561d3ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:33.115270  240275 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:23:33.117074  240275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:23:33.117924  240275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:23:33.117947  240275 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:23:33.118012  240275 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-528769"
	I1129 09:23:33.117916  240275 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:23:33.118026  240275 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-528769"
	I1129 09:23:33.118050  240275 host.go:66] Checking if "default-k8s-diff-port-528769" exists ...
	I1129 09:23:33.118715  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:33.119150  240275 config.go:182] Loaded profile config "default-k8s-diff-port-528769": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:23:33.119221  240275 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-528769"
	I1129 09:23:33.119240  240275 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-528769"
	I1129 09:23:33.119495  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:33.122336  240275 out.go:179] * Verifying Kubernetes components...
	I1129 09:23:33.128673  240275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:23:33.161726  240275 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1129 09:23:29.757288  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	W1129 09:23:32.256523  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:33.164798  240275 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:23:33.164823  240275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:23:33.164885  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:33.166088  240275 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-528769"
	I1129 09:23:33.166122  240275 host.go:66] Checking if "default-k8s-diff-port-528769" exists ...
	I1129 09:23:33.175912  240275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-528769 --format={{.State.Status}}
	I1129 09:23:33.199281  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:33.222515  240275 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:23:33.222541  240275 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:23:33.222609  240275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-528769
	I1129 09:23:33.258168  240275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/default-k8s-diff-port-528769/id_rsa Username:docker}
	I1129 09:23:33.508766  240275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:23:33.629665  240275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:23:33.629919  240275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:23:33.631148  240275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:23:34.323306  240275 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1129 09:23:34.326062  240275 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-528769" to be "Ready" ...
	I1129 09:23:34.369831  240275 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 09:23:34.372744  240275 addons.go:530] duration metric: took 1.254792097s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:23:34.827262  240275 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-528769" context rescaled to 1 replicas
	W1129 09:23:36.336846  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:23:34.256791  236407 node_ready.go:57] node "embed-certs-086358" has "Ready":"False" status (will retry)
	I1129 09:23:36.757719  236407 node_ready.go:49] node "embed-certs-086358" is "Ready"
	I1129 09:23:36.757751  236407 node_ready.go:38] duration metric: took 40.504683915s for node "embed-certs-086358" to be "Ready" ...
	I1129 09:23:36.757767  236407 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:23:36.757839  236407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:23:36.772152  236407 api_server.go:72] duration metric: took 42.439130579s to wait for apiserver process to appear ...
	I1129 09:23:36.772179  236407 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:23:36.772198  236407 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:23:36.781673  236407 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:23:36.782789  236407 api_server.go:141] control plane version: v1.34.1
	I1129 09:23:36.782819  236407 api_server.go:131] duration metric: took 10.632744ms to wait for apiserver health ...
	I1129 09:23:36.782828  236407 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:23:36.786672  236407 system_pods.go:59] 8 kube-system pods found
	I1129 09:23:36.786708  236407 system_pods.go:61] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:23:36.786751  236407 system_pods.go:61] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running
	I1129 09:23:36.786759  236407 system_pods.go:61] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running
	I1129 09:23:36.786763  236407 system_pods.go:61] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running
	I1129 09:23:36.786767  236407 system_pods.go:61] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running
	I1129 09:23:36.786788  236407 system_pods.go:61] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running
	I1129 09:23:36.786799  236407 system_pods.go:61] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running
	I1129 09:23:36.786804  236407 system_pods.go:61] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:23:36.786821  236407 system_pods.go:74] duration metric: took 3.977435ms to wait for pod list to return data ...
	I1129 09:23:36.786842  236407 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:23:36.796913  236407 default_sa.go:45] found service account: "default"
	I1129 09:23:36.796942  236407 default_sa.go:55] duration metric: took 10.093107ms for default service account to be created ...
	I1129 09:23:36.796953  236407 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:23:36.800120  236407 system_pods.go:86] 8 kube-system pods found
	I1129 09:23:36.800156  236407 system_pods.go:89] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:23:36.800163  236407 system_pods.go:89] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running
	I1129 09:23:36.800169  236407 system_pods.go:89] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running
	I1129 09:23:36.800176  236407 system_pods.go:89] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running
	I1129 09:23:36.800181  236407 system_pods.go:89] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running
	I1129 09:23:36.800185  236407 system_pods.go:89] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running
	I1129 09:23:36.800189  236407 system_pods.go:89] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running
	I1129 09:23:36.800195  236407 system_pods.go:89] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:23:36.800218  236407 retry.go:31] will retry after 244.931691ms: missing components: kube-dns
	I1129 09:23:37.049204  236407 system_pods.go:86] 8 kube-system pods found
	I1129 09:23:37.049240  236407 system_pods.go:89] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:23:37.049249  236407 system_pods.go:89] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running
	I1129 09:23:37.049256  236407 system_pods.go:89] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running
	I1129 09:23:37.049261  236407 system_pods.go:89] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running
	I1129 09:23:37.049268  236407 system_pods.go:89] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running
	I1129 09:23:37.049271  236407 system_pods.go:89] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running
	I1129 09:23:37.049275  236407 system_pods.go:89] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running
	I1129 09:23:37.049281  236407 system_pods.go:89] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:23:37.049299  236407 retry.go:31] will retry after 351.544334ms: missing components: kube-dns
	I1129 09:23:37.406631  236407 system_pods.go:86] 8 kube-system pods found
	I1129 09:23:37.406668  236407 system_pods.go:89] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:23:37.406676  236407 system_pods.go:89] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running
	I1129 09:23:37.406684  236407 system_pods.go:89] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running
	I1129 09:23:37.406688  236407 system_pods.go:89] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running
	I1129 09:23:37.406693  236407 system_pods.go:89] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running
	I1129 09:23:37.406697  236407 system_pods.go:89] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running
	I1129 09:23:37.406701  236407 system_pods.go:89] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running
	I1129 09:23:37.406708  236407 system_pods.go:89] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:23:37.406727  236407 retry.go:31] will retry after 466.917085ms: missing components: kube-dns
	I1129 09:23:37.878651  236407 system_pods.go:86] 8 kube-system pods found
	I1129 09:23:37.878718  236407 system_pods.go:89] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Running
	I1129 09:23:37.878732  236407 system_pods.go:89] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running
	I1129 09:23:37.878744  236407 system_pods.go:89] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running
	I1129 09:23:37.878756  236407 system_pods.go:89] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running
	I1129 09:23:37.878766  236407 system_pods.go:89] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running
	I1129 09:23:37.878771  236407 system_pods.go:89] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running
	I1129 09:23:37.878788  236407 system_pods.go:89] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running
	I1129 09:23:37.878797  236407 system_pods.go:89] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Running
	I1129 09:23:37.878806  236407 system_pods.go:126] duration metric: took 1.081846492s to wait for k8s-apps to be running ...
	I1129 09:23:37.878824  236407 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:23:37.878902  236407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:23:37.897082  236407 system_svc.go:56] duration metric: took 18.248356ms WaitForService to wait for kubelet
	I1129 09:23:37.897114  236407 kubeadm.go:587] duration metric: took 43.564102014s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:23:37.897147  236407 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:23:37.900752  236407 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:23:37.900800  236407 node_conditions.go:123] node cpu capacity is 2
	I1129 09:23:37.900815  236407 node_conditions.go:105] duration metric: took 3.661388ms to run NodePressure ...
	I1129 09:23:37.900828  236407 start.go:242] waiting for startup goroutines ...
	I1129 09:23:37.900835  236407 start.go:247] waiting for cluster config update ...
	I1129 09:23:37.900847  236407 start.go:256] writing updated cluster config ...
	I1129 09:23:37.901146  236407 ssh_runner.go:195] Run: rm -f paused
	I1129 09:23:37.905070  236407 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:23:37.909403  236407 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2fhrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.914613  236407 pod_ready.go:94] pod "coredns-66bc5c9577-2fhrs" is "Ready"
	I1129 09:23:37.914644  236407 pod_ready.go:86] duration metric: took 5.213323ms for pod "coredns-66bc5c9577-2fhrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.917366  236407 pod_ready.go:83] waiting for pod "etcd-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.922734  236407 pod_ready.go:94] pod "etcd-embed-certs-086358" is "Ready"
	I1129 09:23:37.922764  236407 pod_ready.go:86] duration metric: took 5.371363ms for pod "etcd-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.925356  236407 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.935376  236407 pod_ready.go:94] pod "kube-apiserver-embed-certs-086358" is "Ready"
	I1129 09:23:37.935405  236407 pod_ready.go:86] duration metric: took 10.024019ms for pod "kube-apiserver-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:37.938034  236407 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:38.310011  236407 pod_ready.go:94] pod "kube-controller-manager-embed-certs-086358" is "Ready"
	I1129 09:23:38.310038  236407 pod_ready.go:86] duration metric: took 371.976112ms for pod "kube-controller-manager-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:38.509764  236407 pod_ready.go:83] waiting for pod "kube-proxy-2qzkl" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:38.909143  236407 pod_ready.go:94] pod "kube-proxy-2qzkl" is "Ready"
	I1129 09:23:38.909223  236407 pod_ready.go:86] duration metric: took 399.433237ms for pod "kube-proxy-2qzkl" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:39.110447  236407 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:39.510364  236407 pod_ready.go:94] pod "kube-scheduler-embed-certs-086358" is "Ready"
	I1129 09:23:39.510395  236407 pod_ready.go:86] duration metric: took 399.922019ms for pod "kube-scheduler-embed-certs-086358" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:23:39.510408  236407 pod_ready.go:40] duration metric: took 1.605298686s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:23:39.579367  236407 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 09:23:39.584594  236407 out.go:179] * Done! kubectl is now configured to use "embed-certs-086358" cluster and "default" namespace by default
	W1129 09:23:38.829404  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:23:40.829873  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:23:43.330432  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:23:45.829486  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	79b2b865b7fe8       1611cd07b61d5       8 seconds ago        Running             busybox                   0                   efd8e18d67cf0       busybox                                      default
	142f1b95a243c       138784d87c9c5       14 seconds ago       Running             coredns                   0                   5369c6303bf8e       coredns-66bc5c9577-2fhrs                     kube-system
	71da9bf637f99       ba04bb24b9575       14 seconds ago       Running             storage-provisioner       0                   0efca620800d2       storage-provisioner                          kube-system
	463144a8348fe       b1a8c6f707935       55 seconds ago       Running             kindnet-cni               0                   aab4417da4c79       kindnet-2x7dg                                kube-system
	0221d25cfd4dd       05baa95f5142d       55 seconds ago       Running             kube-proxy                0                   2b3976be500f5       kube-proxy-2qzkl                             kube-system
	c0577342962bc       a1894772a478e       About a minute ago   Running             etcd                      0                   ce0ecdda9a07e       etcd-embed-certs-086358                      kube-system
	63d03d07ac0a1       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   8554cc3301ab4       kube-scheduler-embed-certs-086358            kube-system
	9a782a50e3036       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   1a4b8eacf5060       kube-apiserver-embed-certs-086358            kube-system
	593a51223ee9a       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   29fb789d52ff0       kube-controller-manager-embed-certs-086358   kube-system
	
	
	==> containerd <==
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.166574472Z" level=info msg="CreateContainer within sandbox \"0efca620800d2c5f1427a9202f9a9b882e4fdd5e4a5d4926bc2000b1db598beb\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f\""
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.167231231Z" level=info msg="StartContainer for \"71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f\""
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.168508744Z" level=info msg="connecting to shim 71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f" address="unix:///run/containerd/s/5fa54d704a5f6ddf23b1dbe2a9a099dfe21b11fa7e715c58c837c2f9e9f8681a" protocol=ttrpc version=3
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.175905235Z" level=info msg="Container 142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.192703047Z" level=info msg="CreateContainer within sandbox \"5369c6303bf8e2c5b80e7f9fdb8af50f09c0a14c9d3bfc7f532cf76fee6c4d3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a\""
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.195857289Z" level=info msg="StartContainer for \"142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a\""
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.198787465Z" level=info msg="connecting to shim 142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a" address="unix:///run/containerd/s/9bbe579b1fde093cc80ae316fad875a7a9d8b9993ae392aab34218490d6f8471" protocol=ttrpc version=3
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.262855492Z" level=info msg="StartContainer for \"71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f\" returns successfully"
	Nov 29 09:23:37 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:37.288354927Z" level=info msg="StartContainer for \"142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a\" returns successfully"
	Nov 29 09:23:40 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:40.129429707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17a6629d-52f0-4e8d-8452-1bf975092ed9,Namespace:default,Attempt:0,}"
	Nov 29 09:23:40 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:40.201463845Z" level=info msg="connecting to shim efd8e18d67cf06c4cefcc26ca617e5fcc785c60802972c8c187040074b249962" address="unix:///run/containerd/s/ef4ffd0eb0b0e4bb117ff41f24dcf2c6602ce90ea44fd96fbb282017970f2120" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:23:40 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:40.283088244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17a6629d-52f0-4e8d-8452-1bf975092ed9,Namespace:default,Attempt:0,} returns sandbox id \"efd8e18d67cf06c4cefcc26ca617e5fcc785c60802972c8c187040074b249962\""
	Nov 29 09:23:40 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:40.288093041Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.414880141Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.416779303Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.419297882Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.424319861Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.425283879Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.137109582s"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.425431655Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.432164684Z" level=info msg="CreateContainer within sandbox \"efd8e18d67cf06c4cefcc26ca617e5fcc785c60802972c8c187040074b249962\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.444062183Z" level=info msg="Container 79b2b865b7fe86303d0af05fef1d8540a010aa143c17fa9f335a88f68da9b2c6: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.458364849Z" level=info msg="CreateContainer within sandbox \"efd8e18d67cf06c4cefcc26ca617e5fcc785c60802972c8c187040074b249962\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"79b2b865b7fe86303d0af05fef1d8540a010aa143c17fa9f335a88f68da9b2c6\""
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.459539166Z" level=info msg="StartContainer for \"79b2b865b7fe86303d0af05fef1d8540a010aa143c17fa9f335a88f68da9b2c6\""
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.461619982Z" level=info msg="connecting to shim 79b2b865b7fe86303d0af05fef1d8540a010aa143c17fa9f335a88f68da9b2c6" address="unix:///run/containerd/s/ef4ffd0eb0b0e4bb117ff41f24dcf2c6602ce90ea44fd96fbb282017970f2120" protocol=ttrpc version=3
	Nov 29 09:23:42 embed-certs-086358 containerd[759]: time="2025-11-29T09:23:42.540461652Z" level=info msg="StartContainer for \"79b2b865b7fe86303d0af05fef1d8540a010aa143c17fa9f335a88f68da9b2c6\" returns successfully"
	
	
	==> coredns [142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48306 - 13485 "HINFO IN 6034152585137040996.4390996263943985383. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023323184s
	
	
	==> describe nodes <==
	Name:               embed-certs-086358
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-086358
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=embed-certs-086358
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_22_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:22:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-086358
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:23:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:23:51 +0000   Sat, 29 Nov 2025 09:22:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:23:51 +0000   Sat, 29 Nov 2025 09:22:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:23:51 +0000   Sat, 29 Nov 2025 09:22:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:23:51 +0000   Sat, 29 Nov 2025 09:23:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-086358
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                f920f567-c286-45f2-93bb-f2ebbdb3ee93
	  Boot ID:                    6647f078-4edd-40c5-9d0e-49eb5ed00bd7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-2fhrs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-embed-certs-086358                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-2x7dg                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-086358             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-086358    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-2qzkl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-086358             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node embed-certs-086358 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node embed-certs-086358 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x7 over 70s)  kubelet          Node embed-certs-086358 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node embed-certs-086358 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node embed-certs-086358 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node embed-certs-086358 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-086358 event: Registered Node embed-certs-086358 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-086358 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014634] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.570975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032231] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767655] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.282538] kauditd_printk_skb: 36 callbacks suppressed
	[Nov29 08:39] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001077] FS-Cache: O-cookie d=00000000b08097f7{9P.session} n=00000000a17ba85f
	[  +0.001074] FS-Cache: O-key=[10] '34323935323231393134'
	[  +0.000776] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=00000000b08097f7{9P.session} n=00000000534469ad
	[  +0.001092] FS-Cache: N-key=[10] '34323935323231393134'
	[Nov29 09:19] hrtimer: interrupt took 12545193 ns
	
	
	==> etcd [c0577342962bca3db58da726fcac889eec75133a917bc6e9cf1feb6a3f337e59] <==
	{"level":"warn","ts":"2025-11-29T09:22:43.972114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.035078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.054202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.073237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.094118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.118163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.135247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.152114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.170617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.233901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.264311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.340684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.390950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.429184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.490890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.525590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.552210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.590008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.641218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.667668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.710843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.755110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.784862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:44.831937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:45.024745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:23:51 up  1:06,  0 user,  load average: 3.55, 3.56, 3.00
	Linux embed-certs-086358 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [463144a8348fe09690fae6daaf1a23bd6db8686609b47d2764b6e39f5bbda974] <==
	I1129 09:22:56.303897       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:22:56.380200       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:22:56.380342       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:22:56.380356       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:22:56.380373       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:22:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:22:56.583621       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:22:56.583816       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:22:56.583907       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:22:56.584959       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 09:23:26.584013       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 09:23:26.585043       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1129 09:23:26.585088       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 09:23:26.585154       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1129 09:23:28.184990       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:23:28.185026       1 metrics.go:72] Registering metrics
	I1129 09:23:28.185099       1 controller.go:711] "Syncing nftables rules"
	I1129 09:23:36.588732       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:23:36.588785       1 main.go:301] handling current node
	I1129 09:23:46.584731       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:23:46.584778       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9a782a50e3036c97768d6ec56613adcf9c14b720a7b95396868f2c8ae21e2c1d] <==
	I1129 09:22:46.511053       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 09:22:46.531272       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:22:46.554136       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:22:46.557209       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:22:46.594835       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:22:46.649600       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:22:46.650001       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 09:22:46.650333       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:22:47.224327       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:22:47.230639       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:22:47.230665       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:22:48.078296       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:22:48.133739       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:22:48.278521       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:22:48.298249       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 09:22:48.300224       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:22:48.316424       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:22:48.677857       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:22:49.115019       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:22:49.144246       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:22:49.160587       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:22:54.218565       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:22:54.249510       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:22:54.431100       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:22:54.828750       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [593a51223ee9a2a228c68dbef6b88d64186dd580dacb1aa36709e7d873bea72b] <==
	I1129 09:22:54.065392       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:22:54.065622       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:22:54.065712       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:22:54.066072       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 09:22:54.067383       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:22:54.067908       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:22:54.072460       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:22:54.077386       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:22:54.099839       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:22:54.100947       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 09:22:54.121745       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:22:54.121858       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:22:54.121940       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-086358"
	I1129 09:22:54.121980       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 09:22:54.123013       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:22:54.123042       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:22:54.124672       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 09:22:54.124723       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 09:22:54.124762       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 09:22:54.124767       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 09:22:54.124772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 09:22:54.129216       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:22:54.133714       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:22:54.173334       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-086358" podCIDRs=["10.244.0.0/24"]
	I1129 09:23:39.127957       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0221d25cfd4ddcdcc16f4f520608d24d9dfa2e0df4ef9c1eb5526108818141b0] <==
	I1129 09:22:56.150869       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:22:56.330433       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:22:56.432577       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:22:56.432641       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 09:22:56.432725       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:22:56.573866       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:22:56.574171       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:22:56.591892       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:22:56.592423       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:22:56.592872       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:22:56.594343       1 config.go:200] "Starting service config controller"
	I1129 09:22:56.594526       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:22:56.594646       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:22:56.594714       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:22:56.594813       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:22:56.594873       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:22:56.595595       1 config.go:309] "Starting node config controller"
	I1129 09:22:56.595701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:22:56.595786       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:22:56.695895       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:22:56.696040       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:22:56.696414       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [63d03d07ac0a1758cd00c71c131868b3e936406ac3079afa609a554f2c6c1c6a] <==
	I1129 09:22:47.023097       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:22:47.025333       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:22:47.029442       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:22:47.029491       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1129 09:22:47.031332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:22:47.040271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:22:47.040317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:22:47.040351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:22:47.040391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:22:47.040423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:22:47.040455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:22:47.040487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:22:47.040520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:22:47.040551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:22:47.040581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:22:47.048426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:22:47.049000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:22:47.049189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:22:47.049625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:22:47.049892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:22:47.050401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:22:47.050588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:22:47.051799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1129 09:22:47.859215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1129 09:22:49.630532       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:22:50 embed-certs-086358 kubelet[1465]: I1129 09:22:50.451377    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-086358" podStartSLOduration=1.4513444500000001 podStartE2EDuration="1.45134445s" podCreationTimestamp="2025-11-29 09:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:50.44854847 +0000 UTC m=+1.382193162" watchObservedRunningTime="2025-11-29 09:22:50.45134445 +0000 UTC m=+1.384989282"
	Nov 29 09:22:50 embed-certs-086358 kubelet[1465]: I1129 09:22:50.451613    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-086358" podStartSLOduration=1.451594347 podStartE2EDuration="1.451594347s" podCreationTimestamp="2025-11-29 09:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:50.432170164 +0000 UTC m=+1.365814848" watchObservedRunningTime="2025-11-29 09:22:50.451594347 +0000 UTC m=+1.385239023"
	Nov 29 09:22:50 embed-certs-086358 kubelet[1465]: I1129 09:22:50.484347    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-086358" podStartSLOduration=1.484327763 podStartE2EDuration="1.484327763s" podCreationTimestamp="2025-11-29 09:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:50.466349628 +0000 UTC m=+1.399994312" watchObservedRunningTime="2025-11-29 09:22:50.484327763 +0000 UTC m=+1.417972447"
	Nov 29 09:22:50 embed-certs-086358 kubelet[1465]: I1129 09:22:50.517834    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-086358" podStartSLOduration=1.517811058 podStartE2EDuration="1.517811058s" podCreationTimestamp="2025-11-29 09:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:50.484908985 +0000 UTC m=+1.418553677" watchObservedRunningTime="2025-11-29 09:22:50.517811058 +0000 UTC m=+1.451455742"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.216867    1465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.232611    1465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.982583    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sznz\" (UniqueName: \"kubernetes.io/projected/4945072e-8049-437d-8593-8f1de5316222-kube-api-access-9sznz\") pod \"kindnet-2x7dg\" (UID: \"4945072e-8049-437d-8593-8f1de5316222\") " pod="kube-system/kindnet-2x7dg"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.982860    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgzp\" (UniqueName: \"kubernetes.io/projected/2def38f6-3e34-4e81-a66a-59f10b8fc3a0-kube-api-access-9jgzp\") pod \"kube-proxy-2qzkl\" (UID: \"2def38f6-3e34-4e81-a66a-59f10b8fc3a0\") " pod="kube-system/kube-proxy-2qzkl"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.982900    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2def38f6-3e34-4e81-a66a-59f10b8fc3a0-kube-proxy\") pod \"kube-proxy-2qzkl\" (UID: \"2def38f6-3e34-4e81-a66a-59f10b8fc3a0\") " pod="kube-system/kube-proxy-2qzkl"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.983031    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2def38f6-3e34-4e81-a66a-59f10b8fc3a0-lib-modules\") pod \"kube-proxy-2qzkl\" (UID: \"2def38f6-3e34-4e81-a66a-59f10b8fc3a0\") " pod="kube-system/kube-proxy-2qzkl"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.983057    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4945072e-8049-437d-8593-8f1de5316222-xtables-lock\") pod \"kindnet-2x7dg\" (UID: \"4945072e-8049-437d-8593-8f1de5316222\") " pod="kube-system/kindnet-2x7dg"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.983267    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2def38f6-3e34-4e81-a66a-59f10b8fc3a0-xtables-lock\") pod \"kube-proxy-2qzkl\" (UID: \"2def38f6-3e34-4e81-a66a-59f10b8fc3a0\") " pod="kube-system/kube-proxy-2qzkl"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.983296    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4945072e-8049-437d-8593-8f1de5316222-cni-cfg\") pod \"kindnet-2x7dg\" (UID: \"4945072e-8049-437d-8593-8f1de5316222\") " pod="kube-system/kindnet-2x7dg"
	Nov 29 09:22:54 embed-certs-086358 kubelet[1465]: I1129 09:22:54.983529    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4945072e-8049-437d-8593-8f1de5316222-lib-modules\") pod \"kindnet-2x7dg\" (UID: \"4945072e-8049-437d-8593-8f1de5316222\") " pod="kube-system/kindnet-2x7dg"
	Nov 29 09:22:55 embed-certs-086358 kubelet[1465]: I1129 09:22:55.127627    1465 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 09:22:56 embed-certs-086358 kubelet[1465]: I1129 09:22:56.534637    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2x7dg" podStartSLOduration=2.534587893 podStartE2EDuration="2.534587893s" podCreationTimestamp="2025-11-29 09:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:56.503433088 +0000 UTC m=+7.437077797" watchObservedRunningTime="2025-11-29 09:22:56.534587893 +0000 UTC m=+7.468232577"
	Nov 29 09:22:57 embed-certs-086358 kubelet[1465]: I1129 09:22:57.348205    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2qzkl" podStartSLOduration=3.348185323 podStartE2EDuration="3.348185323s" podCreationTimestamp="2025-11-29 09:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:22:56.545519823 +0000 UTC m=+7.479164516" watchObservedRunningTime="2025-11-29 09:22:57.348185323 +0000 UTC m=+8.281829998"
	Nov 29 09:23:36 embed-certs-086358 kubelet[1465]: I1129 09:23:36.643565    1465 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:23:36 embed-certs-086358 kubelet[1465]: I1129 09:23:36.860330    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e08be393-d772-4606-bb5b-b754bee79505-tmp\") pod \"storage-provisioner\" (UID: \"e08be393-d772-4606-bb5b-b754bee79505\") " pod="kube-system/storage-provisioner"
	Nov 29 09:23:36 embed-certs-086358 kubelet[1465]: I1129 09:23:36.860375    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smzfk\" (UniqueName: \"kubernetes.io/projected/e08be393-d772-4606-bb5b-b754bee79505-kube-api-access-smzfk\") pod \"storage-provisioner\" (UID: \"e08be393-d772-4606-bb5b-b754bee79505\") " pod="kube-system/storage-provisioner"
	Nov 29 09:23:36 embed-certs-086358 kubelet[1465]: I1129 09:23:36.860400    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8lz\" (UniqueName: \"kubernetes.io/projected/224b9d8a-65f2-44ed-b5b3-9b8f39ac6854-kube-api-access-8w8lz\") pod \"coredns-66bc5c9577-2fhrs\" (UID: \"224b9d8a-65f2-44ed-b5b3-9b8f39ac6854\") " pod="kube-system/coredns-66bc5c9577-2fhrs"
	Nov 29 09:23:36 embed-certs-086358 kubelet[1465]: I1129 09:23:36.860423    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224b9d8a-65f2-44ed-b5b3-9b8f39ac6854-config-volume\") pod \"coredns-66bc5c9577-2fhrs\" (UID: \"224b9d8a-65f2-44ed-b5b3-9b8f39ac6854\") " pod="kube-system/coredns-66bc5c9577-2fhrs"
	Nov 29 09:23:37 embed-certs-086358 kubelet[1465]: I1129 09:23:37.611622    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2fhrs" podStartSLOduration=43.611586205 podStartE2EDuration="43.611586205s" podCreationTimestamp="2025-11-29 09:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:23:37.582263068 +0000 UTC m=+48.515907743" watchObservedRunningTime="2025-11-29 09:23:37.611586205 +0000 UTC m=+48.545230889"
	Nov 29 09:23:39 embed-certs-086358 kubelet[1465]: I1129 09:23:39.813344    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.813321959 podStartE2EDuration="43.813321959s" podCreationTimestamp="2025-11-29 09:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:23:37.650344944 +0000 UTC m=+48.583989628" watchObservedRunningTime="2025-11-29 09:23:39.813321959 +0000 UTC m=+50.746966635"
	Nov 29 09:23:39 embed-certs-086358 kubelet[1465]: I1129 09:23:39.986379    1465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jtht\" (UniqueName: \"kubernetes.io/projected/17a6629d-52f0-4e8d-8452-1bf975092ed9-kube-api-access-6jtht\") pod \"busybox\" (UID: \"17a6629d-52f0-4e8d-8452-1bf975092ed9\") " pod="default/busybox"
	
	
	==> storage-provisioner [71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f] <==
	I1129 09:23:37.294734       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:23:37.311117       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:23:37.311214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:23:37.316998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:37.339912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:23:37.340106       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:23:37.340398       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-086358_2e07c11f-7260-41e3-9e3b-daaadcf9b0d5!
	I1129 09:23:37.341957       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e0af4b5d-59f0-45a0-9470-87209f513e0b", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-086358_2e07c11f-7260-41e3-9e3b-daaadcf9b0d5 became leader
	W1129 09:23:37.360035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:37.364013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:23:37.441152       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-086358_2e07c11f-7260-41e3-9e3b-daaadcf9b0d5!
	W1129 09:23:39.367932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:39.373243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:41.376819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:41.381793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:43.385154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:43.389964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:45.393337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:45.401306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:47.404357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:47.410223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:49.414654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:49.423124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:51.430860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:23:51.436158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-086358 -n embed-certs-086358
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-086358 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (12.87s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-528769 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6ddeb490-d6e5-43be-98f2-27affe7aebb7] Pending
helpers_test.go:352: "busybox" [6ddeb490-d6e5-43be-98f2-27affe7aebb7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6ddeb490-d6e5-43be-98f2-27affe7aebb7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003964748s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-528769 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
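For reference, the failing check can be re-run by hand against the same cluster. The sketch below is illustrative only: it reuses the context and pod names from the log above, and adds a second command (the hard limit, which the test does not assert) purely for comparison; busybox's ash supports ulimit -n and -Hn.

  # soft open-file limit, what the test reads (returned 1024, expected 1048576)
  kubectl --context default-k8s-diff-port-528769 exec busybox -- /bin/sh -c "ulimit -n"
  # hard open-file limit, shown only for comparison (not asserted by the test)
  kubectl --context default-k8s-diff-port-528769 exec busybox -- /bin/sh -c "ulimit -Hn"

The docker inspect output below reports "Ulimits": [] for the node container, which suggests no explicit NOFILE override is set on the minikube node at the Docker level, so the pod inherits whatever default the container runtime applies.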
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-528769
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-528769:

-- stdout --
	[
	    {
	        "Id": "5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24",
	        "Created": "2025-11-29T09:23:02.612868484Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240741,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:23:02.681046223Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24/hosts",
	        "LogPath": "/var/lib/docker/containers/5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24/5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24-json.log",
	        "Name": "/default-k8s-diff-port-528769",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-528769:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-528769",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24",
	                "LowerDir": "/var/lib/docker/overlay2/a71ddad71f36afe8b1808b74a527d6b54533293381c16b968f94e7b63152ecb5-init/diff:/var/lib/docker/overlay2/fc2ab0019906b90b3f033fa414f560878b73f7ff0ebdf77a0b554a40813009d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a71ddad71f36afe8b1808b74a527d6b54533293381c16b968f94e7b63152ecb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a71ddad71f36afe8b1808b74a527d6b54533293381c16b968f94e7b63152ecb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a71ddad71f36afe8b1808b74a527d6b54533293381c16b968f94e7b63152ecb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-528769",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-528769/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-528769",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-528769",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-528769",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ffd92d0ff38c3b4b07b8e8b74e48c8942c2e717121ec7d84b95a2732d3950d33",
	            "SandboxKey": "/var/run/docker/netns/ffd92d0ff38c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-528769": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:77:f0:80:dc:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7ec62bb3cbffa18628e6a9381e8ce5140e33e49e6ef531efa158dc96bd8c1702",
	                    "EndpointID": "d572de689c337c0c98565dbca13982a09a3482492e95e266067d6ab26f58a4a1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-528769",
	                        "5e595d7c5c45"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
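For reference (an illustrative aside, not part of the captured output), an individual port binding in the inspect JSON above can be read back with the same Go-template form the harness uses for 22/tcp later in this log, here applied to the API server port 8444/tcp:

    # extract the published host port for 8444/tcp from the container inspected above
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-528769
    # for this run the inspect output shows 127.0.0.1:33081 -> 8444/tcp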
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-528769 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-528769 logs -n 25: (1.586323734s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-071895 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ stop    │ -p old-k8s-version-071895 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-071895 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p no-preload-230403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ stop    │ -p no-preload-230403 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable dashboard -p no-preload-230403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ start   │ -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:22 UTC │
	│ image   │ old-k8s-version-071895 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ pause   │ -p old-k8s-version-071895 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ unpause │ -p old-k8s-version-071895 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p old-k8s-version-071895                                                                                                                                                                                                                           │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p old-k8s-version-071895                                                                                                                                                                                                                           │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ start   │ -p embed-certs-086358 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:23 UTC │
	│ image   │ no-preload-230403 image list --format=json                                                                                                                                                                                                          │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ pause   │ -p no-preload-230403 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ unpause │ -p no-preload-230403 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p no-preload-230403                                                                                                                                                                                                                                │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p no-preload-230403                                                                                                                                                                                                                                │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p disable-driver-mounts-267340                                                                                                                                                                                                                     │ disable-driver-mounts-267340 │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ start   │ -p default-k8s-diff-port-528769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-528769 │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:24 UTC │
	│ addons  │ enable metrics-server -p embed-certs-086358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:23 UTC │ 29 Nov 25 09:23 UTC │
	│ stop    │ -p embed-certs-086358 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:23 UTC │ 29 Nov 25 09:24 UTC │
	│ addons  │ enable dashboard -p embed-certs-086358 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:24 UTC │ 29 Nov 25 09:24 UTC │
	│ start   │ -p embed-certs-086358 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:24 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:24:05
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:24:05.993370  244729 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:24:05.993538  244729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:24:05.993548  244729 out.go:374] Setting ErrFile to fd 2...
	I1129 09:24:05.993553  244729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:24:05.993831  244729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:24:05.994259  244729 out.go:368] Setting JSON to false
	I1129 09:24:05.995321  244729 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3997,"bootTime":1764404249,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 09:24:05.995399  244729 start.go:143] virtualization:  
	I1129 09:24:06.000998  244729 out.go:179] * [embed-certs-086358] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:24:06.004905  244729 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:24:06.007980  244729 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:24:06.008344  244729 notify.go:221] Checking for updates...
	I1129 09:24:06.014127  244729 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:24:06.016991  244729 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 09:24:06.020026  244729 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:24:06.022976  244729 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:24:06.026528  244729 config.go:182] Loaded profile config "embed-certs-086358": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:24:06.027158  244729 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:24:06.061759  244729 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:24:06.061903  244729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:24:06.128608  244729 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:24:06.11805945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:24:06.128781  244729 docker.go:319] overlay module found
	I1129 09:24:06.132070  244729 out.go:179] * Using the docker driver based on existing profile
	I1129 09:24:06.134995  244729 start.go:309] selected driver: docker
	I1129 09:24:06.135017  244729 start.go:927] validating driver "docker" against &{Name:embed-certs-086358 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-086358 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:24:06.135157  244729 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:24:06.135891  244729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:24:06.200324  244729 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:24:06.190472362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:24:06.200751  244729 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:24:06.200793  244729 cni.go:84] Creating CNI manager for ""
	I1129 09:24:06.200851  244729 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:24:06.200893  244729 start.go:353] cluster config:
	{Name:embed-certs-086358 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-086358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:24:06.206026  244729 out.go:179] * Starting "embed-certs-086358" primary control-plane node in "embed-certs-086358" cluster
	I1129 09:24:06.208982  244729 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:24:06.212011  244729 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:24:06.214789  244729 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:24:06.214841  244729 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1129 09:24:06.214851  244729 cache.go:65] Caching tarball of preloaded images
	I1129 09:24:06.214874  244729 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:24:06.214939  244729 preload.go:238] Found /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1129 09:24:06.214949  244729 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1129 09:24:06.215066  244729 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/config.json ...
	I1129 09:24:06.235146  244729 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:24:06.235166  244729 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:24:06.235189  244729 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:24:06.235223  244729 start.go:360] acquireMachinesLock for embed-certs-086358: {Name:mk1ba4acf87c15b8011d084245765891b3b67063 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:24:06.235289  244729 start.go:364] duration metric: took 48.805µs to acquireMachinesLock for "embed-certs-086358"
	I1129 09:24:06.235317  244729 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:24:06.235322  244729 fix.go:54] fixHost starting: 
	I1129 09:24:06.235579  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:06.257939  244729 fix.go:112] recreateIfNeeded on embed-certs-086358: state=Stopped err=<nil>
	W1129 09:24:06.257970  244729 fix.go:138] unexpected machine state, will restart: <nil>
	W1129 09:24:03.830103  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:24:05.832862  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	I1129 09:24:06.261220  244729 out.go:252] * Restarting existing docker container for "embed-certs-086358" ...
	I1129 09:24:06.261332  244729 cli_runner.go:164] Run: docker start embed-certs-086358
	I1129 09:24:06.541847  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:06.562415  244729 kic.go:430] container "embed-certs-086358" state is running.
	I1129 09:24:06.562805  244729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-086358
	I1129 09:24:06.583556  244729 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/config.json ...
	I1129 09:24:06.583787  244729 machine.go:94] provisionDockerMachine start ...
	I1129 09:24:06.583857  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:06.609923  244729 main.go:143] libmachine: Using SSH client type: native
	I1129 09:24:06.610255  244729 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1129 09:24:06.610264  244729 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:24:06.611703  244729 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 09:24:09.764345  244729 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-086358
	
	I1129 09:24:09.764370  244729 ubuntu.go:182] provisioning hostname "embed-certs-086358"
	I1129 09:24:09.764466  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:09.782891  244729 main.go:143] libmachine: Using SSH client type: native
	I1129 09:24:09.783196  244729 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1129 09:24:09.783213  244729 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-086358 && echo "embed-certs-086358" | sudo tee /etc/hostname
	I1129 09:24:09.947120  244729 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-086358
	
	I1129 09:24:09.947221  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:09.967799  244729 main.go:143] libmachine: Using SSH client type: native
	I1129 09:24:09.968170  244729 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1129 09:24:09.968200  244729 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-086358' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-086358/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-086358' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:24:10.137289  244729 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:24:10.137317  244729 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-2317/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-2317/.minikube}
	I1129 09:24:10.137364  244729 ubuntu.go:190] setting up certificates
	I1129 09:24:10.137373  244729 provision.go:84] configureAuth start
	I1129 09:24:10.137451  244729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-086358
	I1129 09:24:10.156811  244729 provision.go:143] copyHostCerts
	I1129 09:24:10.156893  244729 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem, removing ...
	I1129 09:24:10.156914  244729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem
	I1129 09:24:10.156992  244729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem (1082 bytes)
	I1129 09:24:10.157143  244729 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem, removing ...
	I1129 09:24:10.157157  244729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem
	I1129 09:24:10.157189  244729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem (1123 bytes)
	I1129 09:24:10.157263  244729 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem, removing ...
	I1129 09:24:10.157274  244729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem
	I1129 09:24:10.157302  244729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem (1679 bytes)
	I1129 09:24:10.157364  244729 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem org=jenkins.embed-certs-086358 san=[127.0.0.1 192.168.76.2 embed-certs-086358 localhost minikube]
	I1129 09:24:10.392816  244729 provision.go:177] copyRemoteCerts
	I1129 09:24:10.392885  244729 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:24:10.392932  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:10.411749  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:10.516552  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1129 09:24:10.535275  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:24:10.553707  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:24:10.571311  244729 provision.go:87] duration metric: took 433.909357ms to configureAuth
	I1129 09:24:10.571384  244729 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:24:10.571608  244729 config.go:182] Loaded profile config "embed-certs-086358": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:24:10.571622  244729 machine.go:97] duration metric: took 3.987827692s to provisionDockerMachine
	I1129 09:24:10.571632  244729 start.go:293] postStartSetup for "embed-certs-086358" (driver="docker")
	I1129 09:24:10.571642  244729 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:24:10.571692  244729 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:24:10.571746  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:10.588957  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:10.693146  244729 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:24:10.696470  244729 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:24:10.696501  244729 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:24:10.696512  244729 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/addons for local assets ...
	I1129 09:24:10.696587  244729 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/files for local assets ...
	I1129 09:24:10.696731  244729 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem -> 41372.pem in /etc/ssl/certs
	I1129 09:24:10.696884  244729 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:24:10.704588  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:24:10.723714  244729 start.go:296] duration metric: took 152.05018ms for postStartSetup
	I1129 09:24:10.723845  244729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:24:10.723919  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:10.741660  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:10.849779  244729 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:24:10.854839  244729 fix.go:56] duration metric: took 4.619510663s for fixHost
	I1129 09:24:10.854865  244729 start.go:83] releasing machines lock for "embed-certs-086358", held for 4.619566744s
	I1129 09:24:10.854943  244729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-086358
	I1129 09:24:10.873290  244729 ssh_runner.go:195] Run: cat /version.json
	I1129 09:24:10.873347  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:10.873605  244729 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:24:10.873669  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:10.894020  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:10.907941  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	W1129 09:24:08.330645  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:24:10.829719  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	I1129 09:24:11.002316  244729 ssh_runner.go:195] Run: systemctl --version
	I1129 09:24:11.097017  244729 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:24:11.101719  244729 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:24:11.101848  244729 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:24:11.110157  244729 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:24:11.110185  244729 start.go:496] detecting cgroup driver to use...
	I1129 09:24:11.110219  244729 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 09:24:11.110278  244729 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:24:11.131051  244729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:24:11.145820  244729 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:24:11.145885  244729 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:24:11.162124  244729 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:24:11.176028  244729 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:24:11.313570  244729 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:24:11.472586  244729 docker.go:234] disabling docker service ...
	I1129 09:24:11.472737  244729 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:24:11.491220  244729 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:24:11.505817  244729 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:24:11.626910  244729 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:24:11.742092  244729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:24:11.756473  244729 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:24:11.774355  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:24:11.783913  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:24:11.803515  244729 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1129 09:24:11.803582  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1129 09:24:11.813807  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:24:11.823020  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:24:11.833481  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:24:11.844119  244729 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:24:11.853285  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:24:11.862918  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:24:11.872025  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:24:11.881228  244729 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:24:11.888995  244729 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:24:11.897102  244729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:24:12.016816  244729 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:24:12.169515  244729 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:24:12.169609  244729 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:24:12.173961  244729 start.go:564] Will wait 60s for crictl version
	I1129 09:24:12.174077  244729 ssh_runner.go:195] Run: which crictl
	I1129 09:24:12.178220  244729 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:24:12.210398  244729 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:24:12.210536  244729 ssh_runner.go:195] Run: containerd --version
	I1129 09:24:12.236042  244729 ssh_runner.go:195] Run: containerd --version
	I1129 09:24:12.270601  244729 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:24:12.273640  244729 cli_runner.go:164] Run: docker network inspect embed-certs-086358 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:24:12.291596  244729 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:24:12.295443  244729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:24:12.305327  244729 kubeadm.go:884] updating cluster {Name:embed-certs-086358 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-086358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:24:12.305447  244729 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:24:12.305518  244729 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:24:12.332613  244729 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:24:12.332666  244729 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:24:12.332725  244729 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:24:12.359642  244729 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:24:12.359726  244729 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:24:12.359751  244729 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1129 09:24:12.359902  244729 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-086358 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-086358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:24:12.359994  244729 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:24:12.388576  244729 cni.go:84] Creating CNI manager for ""
	I1129 09:24:12.388603  244729 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:24:12.388662  244729 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:24:12.388688  244729 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-086358 NodeName:embed-certs-086358 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:24:12.388820  244729 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-086358"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:24:12.388896  244729 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:24:12.397252  244729 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:24:12.397322  244729 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:24:12.404903  244729 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1129 09:24:12.417896  244729 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:24:12.430645  244729 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
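
The 2231-byte file staged above is the multi-document kubeadm config printed earlier in this log (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch, assuming gopkg.in/yaml.v3 is available (any multi-document YAML decoder would do), that decodes each document in /var/tmp/minikube/kubeadm.yaml.new and prints its apiVersion and kind, the sort of quick sanity check one could run on the node before kubeadm consumes the file:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3" // assumption: module available; any YAML decoder works
)

func main() {
	// Path taken from the log line above; pass a different file as arg 1 if needed.
	path := "/var/tmp/minikube/kubeadm.yaml.new"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break // end of the multi-document stream
			}
			log.Fatal(err)
		}
		// Each kubeadm document carries apiVersion and kind at the top level.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}

Run against the file generated above, this should print four lines matching the four kinds in the config dump.
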
	I1129 09:24:12.443545  244729 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:24:12.447233  244729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
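
The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current node IP. A rough Go equivalent of that filter-and-append, shown purely as an illustration (it needs root, and the real command matches a tab-delimited suffix):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts" // same file the command above rewrites; requires root
	const name = "control-plane.minikube.internal"
	const ip = "192.168.76.2" // node IP from the log above

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous entry for the control-plane alias, keep everything else.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
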
	I1129 09:24:12.457981  244729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:24:12.569560  244729 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:24:12.588142  244729 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358 for IP: 192.168.76.2
	I1129 09:24:12.588218  244729 certs.go:195] generating shared ca certs ...
	I1129 09:24:12.588249  244729 certs.go:227] acquiring lock for ca certs: {Name:mke655c14945a8520f2f9de36531df923afb2bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:24:12.588417  244729 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key
	I1129 09:24:12.588513  244729 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key
	I1129 09:24:12.588541  244729 certs.go:257] generating profile certs ...
	I1129 09:24:12.588754  244729 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/client.key
	I1129 09:24:12.588864  244729 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/apiserver.key.d6dcf241
	I1129 09:24:12.588937  244729 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/proxy-client.key
	I1129 09:24:12.589079  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem (1338 bytes)
	W1129 09:24:12.589145  244729 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137_empty.pem, impossibly tiny 0 bytes
	I1129 09:24:12.589174  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:24:12.589231  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:24:12.589289  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:24:12.589341  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem (1679 bytes)
	I1129 09:24:12.589426  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:24:12.590092  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:24:12.616180  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1129 09:24:12.635243  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:24:12.653833  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:24:12.673023  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1129 09:24:12.697188  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:24:12.715703  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:24:12.734522  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:24:12.760361  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem --> /usr/share/ca-certificates/4137.pem (1338 bytes)
	I1129 09:24:12.798536  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /usr/share/ca-certificates/41372.pem (1708 bytes)
	I1129 09:24:12.824657  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:24:12.850589  244729 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:24:12.867341  244729 ssh_runner.go:195] Run: openssl version
	I1129 09:24:12.874558  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4137.pem && ln -fs /usr/share/ca-certificates/4137.pem /etc/ssl/certs/4137.pem"
	I1129 09:24:12.883348  244729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4137.pem
	I1129 09:24:12.887127  244729 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/4137.pem
	I1129 09:24:12.887240  244729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4137.pem
	I1129 09:24:12.935609  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4137.pem /etc/ssl/certs/51391683.0"
	I1129 09:24:12.944163  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41372.pem && ln -fs /usr/share/ca-certificates/41372.pem /etc/ssl/certs/41372.pem"
	I1129 09:24:12.953810  244729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41372.pem
	I1129 09:24:12.958540  244729 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/41372.pem
	I1129 09:24:12.958607  244729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41372.pem
	I1129 09:24:13.000925  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41372.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:24:13.010359  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:24:13.019426  244729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:24:13.023411  244729 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:24:13.023493  244729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:24:13.065818  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:24:13.074138  244729 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:24:13.078259  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:24:13.126892  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:24:13.169645  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:24:13.225339  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:24:13.285692  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:24:13.350935  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
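
Each openssl x509 -checkend 86400 run above asks whether a certificate will still be valid 24 hours from now. The same check expressed in Go with only the standard library, as a sketch; the path used in main is one of the certs probed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkend reports whether the certificate in pemPath remains valid for at
// least the given duration, mirroring `openssl x509 -checkend <seconds>`.
func checkend(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for at least 24h:", ok)
}
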
	I1129 09:24:13.449811  244729 kubeadm.go:401] StartCluster: {Name:embed-certs-086358 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-086358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:24:13.449943  244729 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:24:13.450090  244729 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:24:13.517134  244729 cri.go:89] found id: "142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a"
	I1129 09:24:13.517167  244729 cri.go:89] found id: "71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f"
	I1129 09:24:13.517181  244729 cri.go:89] found id: "463144a8348fe09690fae6daaf1a23bd6db8686609b47d2764b6e39f5bbda974"
	I1129 09:24:13.517185  244729 cri.go:89] found id: "0221d25cfd4ddcdcc16f4f520608d24d9dfa2e0df4ef9c1eb5526108818141b0"
	I1129 09:24:13.517215  244729 cri.go:89] found id: "c0577342962bca3db58da726fcac889eec75133a917bc6e9cf1feb6a3f337e59"
	I1129 09:24:13.517225  244729 cri.go:89] found id: "63d03d07ac0a1758cd00c71c131868b3e936406ac3079afa609a554f2c6c1c6a"
	I1129 09:24:13.517229  244729 cri.go:89] found id: "9a782a50e3036c97768d6ec56613adcf9c14b720a7b95396868f2c8ae21e2c1d"
	I1129 09:24:13.517251  244729 cri.go:89] found id: "593a51223ee9a2a228c68dbef6b88d64186dd580dacb1aa36709e7d873bea72b"
	I1129 09:24:13.517254  244729 cri.go:89] found id: ""
	I1129 09:24:13.517337  244729 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1129 09:24:13.541068  244729 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383","pid":870,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383/rootfs","created":"2025-11-29T09:24:13.410790953Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-086358_c7ec7c736ca272174da91dd89bf4beb7","io.kubernetes.cri.san
dbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-086358","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c7ec7c736ca272174da91dd89bf4beb7"},"owner":"root"},{"ociVersion":"1.2.1","id":"88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1ace","pid":903,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1ace","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1ace/rootfs","created":"2025-11-29T09:24:13.430873559Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1a
ce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-086358_1b636d1bccc4c9706d219cde67be2f6e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-embed-certs-086358","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1b636d1bccc4c9706d219cde67be2f6e"},"owner":"root"},{"ociVersion":"1.2.1","id":"f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916","pid":936,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916/rootfs","created":"2025-11-29T09:24:13.496374964Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-
cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-086358_ad583e0080dbc35d38398d9c570ec954","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-086358","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ad583e0080dbc35d38398d9c570ec954"},"owner":"root"}]
	I1129 09:24:13.541285  244729 cri.go:126] list returned 3 containers
	I1129 09:24:13.541310  244729 cri.go:129] container: {ID:25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383 Status:running}
	I1129 09:24:13.541360  244729 cri.go:131] skipping 25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383 - not in ps
	I1129 09:24:13.541371  244729 cri.go:129] container: {ID:88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1ace Status:created}
	I1129 09:24:13.541380  244729 cri.go:131] skipping 88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1ace - not in ps
	I1129 09:24:13.541392  244729 cri.go:129] container: {ID:f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916 Status:created}
	I1129 09:24:13.541455  244729 cri.go:131] skipping f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916 - not in ps
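
The JSON dumped above comes from runc list -f json, and the cri.go lines after it keep only containers that also appeared in the earlier crictl ps output. A sketch that decodes the two fields the log actually inspects (id and status) and filters on status; the real logic additionally cross-checks each ID against the crictl list, which this sketch does not attempt:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// runcContainer holds the two fields from `runc list -f json` that the
// log above inspects: the container ID and its runtime status.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// Same invocation as the ssh_runner line above; requires root on the node.
	out, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		if c.Status != "running" {
			// Comparable to the "skipping <id> - ..." lines above for created sandboxes.
			fmt.Fprintf(os.Stderr, "skipping %s (status %s)\n", c.ID, c.Status)
			continue
		}
		fmt.Println(c.ID)
	}
}
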
	I1129 09:24:13.541594  244729 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:24:13.556666  244729 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:24:13.556690  244729 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:24:13.556796  244729 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:24:13.567202  244729 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:24:13.567831  244729 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-086358" does not appear in /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:24:13.568114  244729 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-2317/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-086358" cluster setting kubeconfig missing "embed-certs-086358" context setting]
	I1129 09:24:13.568667  244729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:24:13.570198  244729 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:24:13.593413  244729 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 09:24:13.593457  244729 kubeadm.go:602] duration metric: took 36.760991ms to restartPrimaryControlPlane
	I1129 09:24:13.593484  244729 kubeadm.go:403] duration metric: took 143.695012ms to StartCluster
	I1129 09:24:13.593508  244729 settings.go:142] acquiring lock: {Name:mk44917d1324740eeda65bf3aa312ad1561d3ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:24:13.593642  244729 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:24:13.595164  244729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:24:13.595836  244729 config.go:182] Loaded profile config "embed-certs-086358": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:24:13.595888  244729 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:24:13.595948  244729 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:24:13.596022  244729 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-086358"
	I1129 09:24:13.596043  244729 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-086358"
	W1129 09:24:13.596049  244729 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:24:13.596072  244729 host.go:66] Checking if "embed-certs-086358" exists ...
	I1129 09:24:13.596575  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:13.597267  244729 addons.go:70] Setting default-storageclass=true in profile "embed-certs-086358"
	I1129 09:24:13.597301  244729 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-086358"
	I1129 09:24:13.597423  244729 addons.go:70] Setting metrics-server=true in profile "embed-certs-086358"
	I1129 09:24:13.597439  244729 addons.go:239] Setting addon metrics-server=true in "embed-certs-086358"
	W1129 09:24:13.597446  244729 addons.go:248] addon metrics-server should already be in state true
	I1129 09:24:13.597468  244729 host.go:66] Checking if "embed-certs-086358" exists ...
	I1129 09:24:13.597600  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:13.597909  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:13.600895  244729 addons.go:70] Setting dashboard=true in profile "embed-certs-086358"
	I1129 09:24:13.600924  244729 addons.go:239] Setting addon dashboard=true in "embed-certs-086358"
	W1129 09:24:13.601128  244729 addons.go:248] addon dashboard should already be in state true
	I1129 09:24:13.601165  244729 host.go:66] Checking if "embed-certs-086358" exists ...
	I1129 09:24:13.608138  244729 out.go:179] * Verifying Kubernetes components...
	I1129 09:24:13.610194  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:13.611896  244729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:24:13.640493  244729 addons.go:239] Setting addon default-storageclass=true in "embed-certs-086358"
	W1129 09:24:13.640567  244729 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:24:13.640607  244729 host.go:66] Checking if "embed-certs-086358" exists ...
	I1129 09:24:13.641154  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:13.691005  244729 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:24:13.694839  244729 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:24:13.694861  244729 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:24:13.694961  244729 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:24:13.694976  244729 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:24:13.695032  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:13.695063  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:13.712096  244729 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:24:13.712110  244729 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1129 09:24:13.716114  244729 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:24:13.717933  244729 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1129 09:24:13.717955  244729 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1129 09:24:13.718025  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:13.724509  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:24:13.724545  244729 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:24:13.724758  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:13.761058  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:13.768010  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:13.788833  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:13.797023  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:14.040589  244729 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:24:14.170216  244729 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:24:14.179858  244729 node_ready.go:35] waiting up to 6m0s for node "embed-certs-086358" to be "Ready" ...
	I1129 09:24:14.357642  244729 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1129 09:24:14.357711  244729 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1129 09:24:14.373863  244729 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:24:14.589631  244729 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1129 09:24:14.589699  244729 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1129 09:24:14.788141  244729 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:24:14.788208  244729 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1129 09:24:14.894948  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:24:14.895013  244729 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:24:15.020436  244729 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:24:15.175046  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:24:15.175122  244729 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:24:15.298748  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:24:15.298775  244729 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:24:15.384364  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:24:15.384390  244729 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:24:15.441830  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:24:15.441904  244729 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:24:15.507101  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:24:15.507184  244729 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:24:15.566346  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:24:15.566375  244729 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:24:15.601213  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:24:15.601242  244729 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:24:15.630607  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:24:15.630633  244729 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:24:15.667943  244729 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1129 09:24:13.330519  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	I1129 09:24:13.837584  240275 node_ready.go:49] node "default-k8s-diff-port-528769" is "Ready"
	I1129 09:24:13.837625  240275 node_ready.go:38] duration metric: took 39.511473274s for node "default-k8s-diff-port-528769" to be "Ready" ...
	I1129 09:24:13.837641  240275 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:24:13.837703  240275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:24:13.888608  240275 api_server.go:72] duration metric: took 40.77055457s to wait for apiserver process to appear ...
	I1129 09:24:13.888657  240275 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:24:13.888684  240275 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1129 09:24:13.902197  240275 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1129 09:24:13.911385  240275 api_server.go:141] control plane version: v1.34.1
	I1129 09:24:13.911472  240275 api_server.go:131] duration metric: took 22.805748ms to wait for apiserver health ...
	I1129 09:24:13.911498  240275 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:24:13.932369  240275 system_pods.go:59] 8 kube-system pods found
	I1129 09:24:13.932411  240275 system_pods.go:61] "coredns-66bc5c9577-ctldr" [93f75ca1-8d71-403e-800c-4e8dfdcdecd7] Pending
	I1129 09:24:13.932418  240275 system_pods.go:61] "etcd-default-k8s-diff-port-528769" [71ce5ce8-1e99-4a37-a8d6-6e431a9bb7f0] Running
	I1129 09:24:13.932422  240275 system_pods.go:61] "kindnet-kbqpv" [a2e00f40-c25d-4a2c-bac7-625ebd0f84de] Running
	I1129 09:24:13.932427  240275 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-528769" [d0d8dd1a-0031-4f91-b707-a269ba65d0cb] Running
	I1129 09:24:13.932431  240275 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-528769" [afc764fa-530b-4a51-af87-d3800da90c3f] Running
	I1129 09:24:13.932434  240275 system_pods.go:61] "kube-proxy-2gqpj" [9e27282e-db8e-430f-84db-c3ee57d5ff85] Running
	I1129 09:24:13.932438  240275 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-528769" [1e369541-195e-4df1-9527-732b37ad7172] Running
	I1129 09:24:13.932443  240275 system_pods.go:61] "storage-provisioner" [a5ab4c77-abf4-473f-aca7-608c3f1aac39] Pending
	I1129 09:24:13.932449  240275 system_pods.go:74] duration metric: took 20.916105ms to wait for pod list to return data ...
	I1129 09:24:13.932459  240275 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:24:13.941567  240275 default_sa.go:45] found service account: "default"
	I1129 09:24:13.941598  240275 default_sa.go:55] duration metric: took 9.132838ms for default service account to be created ...
	I1129 09:24:13.941608  240275 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:24:13.965924  240275 system_pods.go:86] 8 kube-system pods found
	I1129 09:24:13.965964  240275 system_pods.go:89] "coredns-66bc5c9577-ctldr" [93f75ca1-8d71-403e-800c-4e8dfdcdecd7] Pending
	I1129 09:24:13.965971  240275 system_pods.go:89] "etcd-default-k8s-diff-port-528769" [71ce5ce8-1e99-4a37-a8d6-6e431a9bb7f0] Running
	I1129 09:24:13.965979  240275 system_pods.go:89] "kindnet-kbqpv" [a2e00f40-c25d-4a2c-bac7-625ebd0f84de] Running
	I1129 09:24:13.965984  240275 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-528769" [d0d8dd1a-0031-4f91-b707-a269ba65d0cb] Running
	I1129 09:24:13.965989  240275 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-528769" [afc764fa-530b-4a51-af87-d3800da90c3f] Running
	I1129 09:24:13.965993  240275 system_pods.go:89] "kube-proxy-2gqpj" [9e27282e-db8e-430f-84db-c3ee57d5ff85] Running
	I1129 09:24:13.965998  240275 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-528769" [1e369541-195e-4df1-9527-732b37ad7172] Running
	I1129 09:24:13.966040  240275 system_pods.go:89] "storage-provisioner" [a5ab4c77-abf4-473f-aca7-608c3f1aac39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:13.966064  240275 retry.go:31] will retry after 311.571586ms: missing components: kube-dns
	I1129 09:24:14.282202  240275 system_pods.go:86] 8 kube-system pods found
	I1129 09:24:14.282243  240275 system_pods.go:89] "coredns-66bc5c9577-ctldr" [93f75ca1-8d71-403e-800c-4e8dfdcdecd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:14.282252  240275 system_pods.go:89] "etcd-default-k8s-diff-port-528769" [71ce5ce8-1e99-4a37-a8d6-6e431a9bb7f0] Running
	I1129 09:24:14.282259  240275 system_pods.go:89] "kindnet-kbqpv" [a2e00f40-c25d-4a2c-bac7-625ebd0f84de] Running
	I1129 09:24:14.282263  240275 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-528769" [d0d8dd1a-0031-4f91-b707-a269ba65d0cb] Running
	I1129 09:24:14.282268  240275 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-528769" [afc764fa-530b-4a51-af87-d3800da90c3f] Running
	I1129 09:24:14.282272  240275 system_pods.go:89] "kube-proxy-2gqpj" [9e27282e-db8e-430f-84db-c3ee57d5ff85] Running
	I1129 09:24:14.282276  240275 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-528769" [1e369541-195e-4df1-9527-732b37ad7172] Running
	I1129 09:24:14.282283  240275 system_pods.go:89] "storage-provisioner" [a5ab4c77-abf4-473f-aca7-608c3f1aac39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:14.282298  240275 retry.go:31] will retry after 347.295337ms: missing components: kube-dns
	I1129 09:24:14.634494  240275 system_pods.go:86] 8 kube-system pods found
	I1129 09:24:14.634533  240275 system_pods.go:89] "coredns-66bc5c9577-ctldr" [93f75ca1-8d71-403e-800c-4e8dfdcdecd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:14.634541  240275 system_pods.go:89] "etcd-default-k8s-diff-port-528769" [71ce5ce8-1e99-4a37-a8d6-6e431a9bb7f0] Running
	I1129 09:24:14.634548  240275 system_pods.go:89] "kindnet-kbqpv" [a2e00f40-c25d-4a2c-bac7-625ebd0f84de] Running
	I1129 09:24:14.634553  240275 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-528769" [d0d8dd1a-0031-4f91-b707-a269ba65d0cb] Running
	I1129 09:24:14.634565  240275 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-528769" [afc764fa-530b-4a51-af87-d3800da90c3f] Running
	I1129 09:24:14.634572  240275 system_pods.go:89] "kube-proxy-2gqpj" [9e27282e-db8e-430f-84db-c3ee57d5ff85] Running
	I1129 09:24:14.634576  240275 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-528769" [1e369541-195e-4df1-9527-732b37ad7172] Running
	I1129 09:24:14.634584  240275 system_pods.go:89] "storage-provisioner" [a5ab4c77-abf4-473f-aca7-608c3f1aac39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:14.634604  240275 retry.go:31] will retry after 330.852195ms: missing components: kube-dns
	I1129 09:24:14.984363  240275 system_pods.go:86] 8 kube-system pods found
	I1129 09:24:14.984400  240275 system_pods.go:89] "coredns-66bc5c9577-ctldr" [93f75ca1-8d71-403e-800c-4e8dfdcdecd7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:14.984408  240275 system_pods.go:89] "etcd-default-k8s-diff-port-528769" [71ce5ce8-1e99-4a37-a8d6-6e431a9bb7f0] Running
	I1129 09:24:14.984415  240275 system_pods.go:89] "kindnet-kbqpv" [a2e00f40-c25d-4a2c-bac7-625ebd0f84de] Running
	I1129 09:24:14.984419  240275 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-528769" [d0d8dd1a-0031-4f91-b707-a269ba65d0cb] Running
	I1129 09:24:14.984423  240275 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-528769" [afc764fa-530b-4a51-af87-d3800da90c3f] Running
	I1129 09:24:14.984428  240275 system_pods.go:89] "kube-proxy-2gqpj" [9e27282e-db8e-430f-84db-c3ee57d5ff85] Running
	I1129 09:24:14.984431  240275 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-528769" [1e369541-195e-4df1-9527-732b37ad7172] Running
	I1129 09:24:14.984437  240275 system_pods.go:89] "storage-provisioner" [a5ab4c77-abf4-473f-aca7-608c3f1aac39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:14.984445  240275 system_pods.go:126] duration metric: took 1.042830835s to wait for k8s-apps to be running ...
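
The retry.go lines above poll the kube-system pod list and back off roughly 300ms between attempts until kube-dns is running. A self-contained Go sketch of that poll-until-ready pattern, with a stand-in check function (a real caller would list the pods through the API server):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitRunning polls check() until it returns nil or the deadline passes,
// sleeping a small randomized interval between attempts, similar in spirit
// to the "will retry after ...ms: missing components: kube-dns" lines above.
func waitRunning(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		backoff := 300*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
	}
}

func main() {
	ready := false
	// Stand-in check; succeeds on the second attempt for demonstration only.
	err := waitRunning(5*time.Second, func() error {
		if !ready {
			ready = true
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("result:", err)
}
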
	I1129 09:24:14.984453  240275 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:24:14.984507  240275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:24:15.034493  240275 system_svc.go:56] duration metric: took 50.029542ms WaitForService to wait for kubelet
	I1129 09:24:15.034527  240275 kubeadm.go:587] duration metric: took 41.91648463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:24:15.034584  240275 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:24:15.084196  240275 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:24:15.084229  240275 node_conditions.go:123] node cpu capacity is 2
	I1129 09:24:15.084242  240275 node_conditions.go:105] duration metric: took 49.644572ms to run NodePressure ...
	I1129 09:24:15.084277  240275 start.go:242] waiting for startup goroutines ...
	I1129 09:24:15.084293  240275 start.go:247] waiting for cluster config update ...
	I1129 09:24:15.084306  240275 start.go:256] writing updated cluster config ...
	I1129 09:24:15.084653  240275 ssh_runner.go:195] Run: rm -f paused
	I1129 09:24:15.088604  240275 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:24:15.111106  240275 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ctldr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.127162  240275 pod_ready.go:94] pod "coredns-66bc5c9577-ctldr" is "Ready"
	I1129 09:24:15.127191  240275 pod_ready.go:86] duration metric: took 16.054824ms for pod "coredns-66bc5c9577-ctldr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.134974  240275 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.144596  240275 pod_ready.go:94] pod "etcd-default-k8s-diff-port-528769" is "Ready"
	I1129 09:24:15.144711  240275 pod_ready.go:86] duration metric: took 9.703328ms for pod "etcd-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.149891  240275 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.156220  240275 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-528769" is "Ready"
	I1129 09:24:15.156248  240275 pod_ready.go:86] duration metric: took 6.330466ms for pod "kube-apiserver-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.162695  240275 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.493140  240275 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-528769" is "Ready"
	I1129 09:24:15.493167  240275 pod_ready.go:86] duration metric: took 330.448997ms for pod "kube-controller-manager-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.693783  240275 pod_ready.go:83] waiting for pod "kube-proxy-2gqpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:16.093282  240275 pod_ready.go:94] pod "kube-proxy-2gqpj" is "Ready"
	I1129 09:24:16.093314  240275 pod_ready.go:86] duration metric: took 399.498541ms for pod "kube-proxy-2gqpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:16.293472  240275 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:16.693529  240275 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-528769" is "Ready"
	I1129 09:24:16.693558  240275 pod_ready.go:86] duration metric: took 400.058282ms for pod "kube-scheduler-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:16.693572  240275 pod_ready.go:40] duration metric: took 1.604882772s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:24:16.803967  240275 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 09:24:16.807986  240275 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-528769" cluster and "default" namespace by default
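
The version line above reports kubectl 1.33.2 against a 1.34.1 cluster, a minor skew of 1, which stays inside kubectl's supported one-minor-version window. A small Go sketch of that skew computation on the two version strings from the log:

package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version string.
func minor(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	kubectlVer, clusterVer := "1.33.2", "1.34.1" // values from the log line above
	km, err := minor(kubectlVer)
	if err != nil {
		log.Fatal(err)
	}
	cm, err := minor(clusterVer)
	if err != nil {
		log.Fatal(err)
	}
	skew := cm - km
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVer, clusterVer, skew)
	if skew > 1 {
		fmt.Println("warning: kubectl is more than one minor version away from the cluster")
	}
}
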
	I1129 09:24:19.089120  244729 node_ready.go:49] node "embed-certs-086358" is "Ready"
	I1129 09:24:19.089148  244729 node_ready.go:38] duration metric: took 4.909218102s for node "embed-certs-086358" to be "Ready" ...
	I1129 09:24:19.089161  244729 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:24:19.089220  244729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:24:19.357422  244729 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.187168109s)
	I1129 09:24:21.764211  244729 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.3903087s)
	I1129 09:24:21.814241  244729 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.793761819s)
	I1129 09:24:21.814277  244729 addons.go:495] Verifying addon metrics-server=true in "embed-certs-086358"
	I1129 09:24:21.814383  244729 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.146399725s)
	I1129 09:24:21.814573  244729 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.725340975s)
	I1129 09:24:21.814593  244729 api_server.go:72] duration metric: took 8.217827537s to wait for apiserver process to appear ...
	I1129 09:24:21.814599  244729 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:24:21.814624  244729 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:24:21.817762  244729 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-086358 addons enable metrics-server
	
	I1129 09:24:21.820786  244729 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1129 09:24:21.823248  244729 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:24:21.823281  244729 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:24:21.824543  244729 addons.go:530] duration metric: took 8.228593764s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1129 09:24:22.314854  244729 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:24:22.323101  244729 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:24:22.324154  244729 api_server.go:141] control plane version: v1.34.1
	I1129 09:24:22.324182  244729 api_server.go:131] duration metric: took 509.576989ms to wait for apiserver health ...
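
The healthz probes above first receive a 500 with the per-check [+]/[-] breakdown, then a 200 about half a second later. A sketch of that polling loop against the same endpoint using only the Go standard library; it skips TLS verification for brevity, whereas a real client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; adjust host and port for other profiles.
	url := "https://192.168.76.2:8443/healthz"

	client := &http.Client{
		// InsecureSkipVerify is for this sketch only; the apiserver presents a
		// cluster-internal certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}

	for i := 0; i < 10; i++ {
		resp, err := client.Get(url)
		if err != nil {
			log.Printf("healthz: %v", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// A 500 lists each check as [+] ok or [-] failed, as in the log above.
			log.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen above
	}
	log.Fatal("apiserver did not become healthy")
}
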
	I1129 09:24:22.324193  244729 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:24:22.328151  244729 system_pods.go:59] 9 kube-system pods found
	I1129 09:24:22.328189  244729 system_pods.go:61] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:22.328205  244729 system_pods.go:61] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:24:22.328212  244729 system_pods.go:61] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:24:22.328219  244729 system_pods.go:61] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:24:22.328234  244729 system_pods.go:61] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:24:22.328248  244729 system_pods.go:61] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:24:22.328284  244729 system_pods.go:61] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:24:22.328296  244729 system_pods.go:61] "metrics-server-746fcd58dc-dc5c4" [51467193-8be4-44e0-9cf2-e54613662115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:24:22.328302  244729 system_pods.go:61] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:22.328307  244729 system_pods.go:74] duration metric: took 4.108882ms to wait for pod list to return data ...
	I1129 09:24:22.328317  244729 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:24:22.332684  244729 default_sa.go:45] found service account: "default"
	I1129 09:24:22.332713  244729 default_sa.go:55] duration metric: took 4.388999ms for default service account to be created ...
	I1129 09:24:22.332726  244729 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:24:22.337849  244729 system_pods.go:86] 9 kube-system pods found
	I1129 09:24:22.337882  244729 system_pods.go:89] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:22.337891  244729 system_pods.go:89] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:24:22.337900  244729 system_pods.go:89] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:24:22.337907  244729 system_pods.go:89] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:24:22.337914  244729 system_pods.go:89] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:24:22.337926  244729 system_pods.go:89] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:24:22.337932  244729 system_pods.go:89] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:24:22.337941  244729 system_pods.go:89] "metrics-server-746fcd58dc-dc5c4" [51467193-8be4-44e0-9cf2-e54613662115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:24:22.337953  244729 system_pods.go:89] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:22.337960  244729 system_pods.go:126] duration metric: took 5.228585ms to wait for k8s-apps to be running ...
	I1129 09:24:22.337973  244729 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:24:22.338032  244729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:24:22.357396  244729 system_svc.go:56] duration metric: took 19.414582ms WaitForService to wait for kubelet
	I1129 09:24:22.357423  244729 kubeadm.go:587] duration metric: took 8.760655597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:24:22.357443  244729 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:24:22.366590  244729 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:24:22.366619  244729 node_conditions.go:123] node cpu capacity is 2
	I1129 09:24:22.366631  244729 node_conditions.go:105] duration metric: took 9.183037ms to run NodePressure ...
	I1129 09:24:22.366644  244729 start.go:242] waiting for startup goroutines ...
	I1129 09:24:22.366651  244729 start.go:247] waiting for cluster config update ...
	I1129 09:24:22.366662  244729 start.go:256] writing updated cluster config ...
	I1129 09:24:22.366952  244729 ssh_runner.go:195] Run: rm -f paused
	I1129 09:24:22.373153  244729 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:24:22.377624  244729 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2fhrs" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:24:24.383517  244729 pod_ready.go:104] pod "coredns-66bc5c9577-2fhrs" is not "Ready", error: <nil>
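The minikube log above shows the standard readiness loop: the apiserver's verbose /healthz output reports "healthz check failed" while a poststarthook is still settling, and minikube keeps probing https://192.168.76.2:8443/healthz until it gets a plain 200 "ok", after which it waits for kube-system pods, the default service account, and the kubelet service. A minimal Go sketch of that kind of poll is below; it is an illustration only, not minikube's actual api_server.go code, the endpoint URL is copied from the log, and TLS verification is skipped purely to keep the sketch short (minikube itself trusts the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz keeps probing the apiserver /healthz endpoint until it
// returns 200 OK or the deadline expires, mirroring the retry loop
// visible in the log above. InsecureSkipVerify is only for the sketch.
func pollHealthz(endpoint string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "returned 200: ok" above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}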
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	e773c8e5f04fb       1611cd07b61d5       7 seconds ago        Running             busybox                   0                   693368e1372e3       busybox                                                default
	e065b1d7f32b7       ba04bb24b9575       12 seconds ago       Running             storage-provisioner       0                   d13d89d8d1bc8       storage-provisioner                                    kube-system
	259f0db699021       138784d87c9c5       12 seconds ago       Running             coredns                   0                   109507c0808f5       coredns-66bc5c9577-ctldr                               kube-system
	e2750e199427d       b1a8c6f707935       54 seconds ago       Running             kindnet-cni               0                   22e5dbdc1c4ab       kindnet-kbqpv                                          kube-system
	a502369f7017e       05baa95f5142d       54 seconds ago       Running             kube-proxy                0                   5aedc7d875abd       kube-proxy-2gqpj                                       kube-system
	d7c6a263dd131       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   62c838fc3d19d       kube-scheduler-default-k8s-diff-port-528769            kube-system
	661e12966a53b       a1894772a478e       About a minute ago   Running             etcd                      0                   f30610c5f6ba6       etcd-default-k8s-diff-port-528769                      kube-system
	61f9930bff256       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   f4a57753833cd       kube-controller-manager-default-k8s-diff-port-528769   kube-system
	364a9c3a9acf5       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   869067602218b       kube-apiserver-default-k8s-diff-port-528769            kube-system
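The table above is the node's CRI-level view of its containers. A hedged way to reproduce a similar listing from inside the minikube node is to shell out to crictl, which in these node images talks to the containerd socket; the sketch below assumes crictl is present and configured on the node and is not part of the test harness itself.

package main

import (
	"fmt"
	"os/exec"
)

// listContainers runs "crictl ps -a" to list all containers known to
// the CRI runtime, roughly the data shown in the container status
// table above. crictl and its runtime endpoint are assumptions about
// the node image.
func listContainers() (string, error) {
	out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := listContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(out)
}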
	
	
	==> containerd <==
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.592523114Z" level=info msg="Container e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.594868326Z" level=info msg="CreateContainer within sandbox \"109507c0808f5ba56dd13d3b720d16cf949975155ba9f6adb0b48dc124f075a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"259f0db699021fc88082b326e2747e9d6786d34dec60278315b7f50b3db4dfcf\""
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.598964523Z" level=info msg="StartContainer for \"259f0db699021fc88082b326e2747e9d6786d34dec60278315b7f50b3db4dfcf\""
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.600474743Z" level=info msg="connecting to shim 259f0db699021fc88082b326e2747e9d6786d34dec60278315b7f50b3db4dfcf" address="unix:///run/containerd/s/d7a868913f8106d92478231f95eba089e4dbc448931fa10ad71a8cd3d9558781" protocol=ttrpc version=3
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.611655995Z" level=info msg="CreateContainer within sandbox \"d13d89d8d1bc8e1a36f044f6bf215e74ded5b1bc02d69426db52207cde52479e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107\""
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.618257791Z" level=info msg="StartContainer for \"e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107\""
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.625051243Z" level=info msg="connecting to shim e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107" address="unix:///run/containerd/s/01482df1ac8a0207fb368df13b205c2c412f71ebb57f8730f9afcee6067b4a96" protocol=ttrpc version=3
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.801380189Z" level=info msg="StartContainer for \"e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107\" returns successfully"
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.801541724Z" level=info msg="StartContainer for \"259f0db699021fc88082b326e2747e9d6786d34dec60278315b7f50b3db4dfcf\" returns successfully"
	Nov 29 09:24:17 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:17.752842357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6ddeb490-d6e5-43be-98f2-27affe7aebb7,Namespace:default,Attempt:0,}"
	Nov 29 09:24:17 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:17.822769937Z" level=info msg="connecting to shim 693368e1372e3e90621506b01fc5e13c5116ff341ef619df8ad9d266b094d64c" address="unix:///run/containerd/s/de6850fb4c1a170ebe372b18fe55fa90c6452b0d88d48cb612981df16a30a8b2" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:24:17 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:17.925310832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6ddeb490-d6e5-43be-98f2-27affe7aebb7,Namespace:default,Attempt:0,} returns sandbox id \"693368e1372e3e90621506b01fc5e13c5116ff341ef619df8ad9d266b094d64c\""
	Nov 29 09:24:17 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:17.927424634Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.060690578Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.064136235Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937190"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.067409182Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.070851836Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.071466183Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.143995691s"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.071940368Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.085600093Z" level=info msg="CreateContainer within sandbox \"693368e1372e3e90621506b01fc5e13c5116ff341ef619df8ad9d266b094d64c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.108278218Z" level=info msg="Container e773c8e5f04fb91a6e3981b96716ea4ca35a32aa8ca1b216a4bf7161f766975a: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.120356542Z" level=info msg="CreateContainer within sandbox \"693368e1372e3e90621506b01fc5e13c5116ff341ef619df8ad9d266b094d64c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"e773c8e5f04fb91a6e3981b96716ea4ca35a32aa8ca1b216a4bf7161f766975a\""
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.131654505Z" level=info msg="StartContainer for \"e773c8e5f04fb91a6e3981b96716ea4ca35a32aa8ca1b216a4bf7161f766975a\""
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.132736374Z" level=info msg="connecting to shim e773c8e5f04fb91a6e3981b96716ea4ca35a32aa8ca1b216a4bf7161f766975a" address="unix:///run/containerd/s/de6850fb4c1a170ebe372b18fe55fa90c6452b0d88d48cb612981df16a30a8b2" protocol=ttrpc version=3
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.288700775Z" level=info msg="StartContainer for \"e773c8e5f04fb91a6e3981b96716ea4ca35a32aa8ca1b216a4bf7161f766975a\" returns successfully"
	
	
	==> coredns [259f0db699021fc88082b326e2747e9d6786d34dec60278315b7f50b3db4dfcf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33805 - 17985 "HINFO IN 7772944422344697396.6105932290377148216. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041076126s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-528769
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-528769
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=default-k8s-diff-port-528769
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_23_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:23:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-528769
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:24:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:24:13 +0000   Sat, 29 Nov 2025 09:23:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:24:13 +0000   Sat, 29 Nov 2025 09:23:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:24:13 +0000   Sat, 29 Nov 2025 09:23:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:24:13 +0000   Sat, 29 Nov 2025 09:24:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-528769
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                12e356a0-2870-4f3c-9aee-edeacd128acb
	  Boot ID:                    6647f078-4edd-40c5-9d0e-49eb5ed00bd7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-ctldr                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-528769                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-kbqpv                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-528769             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-528769    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-2gqpj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-528769             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Warning  CgroupV1                 68s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x7 over 68s)  kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-528769 event: Registered Node default-k8s-diff-port-528769 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-528769 status is now: NodeReady
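The 850m (42%) CPU request figure in the Allocated resources table above is just the sum of the per-pod CPU requests listed under Non-terminated Pods, divided by the node's 2-CPU (2000m) allocatable; kubectl truncates the percentage, so 42.5% is shown as 42%. A small sketch of that arithmetic, with the values copied from the tables above and pod names abbreviated:

package main

import "fmt"

func main() {
	// Per-pod CPU requests in millicores, from "Non-terminated Pods" above
	// (names abbreviated).
	requests := map[string]int{
		"busybox":                 0,
		"coredns":                 100,
		"etcd":                    100,
		"kindnet":                 100,
		"kube-apiserver":          250,
		"kube-controller-manager": 200,
		"kube-proxy":              0,
		"kube-scheduler":          100,
		"storage-provisioner":     0,
	}
	total := 0
	for _, m := range requests {
		total += m
	}
	allocatable := 2000 // 2 CPUs = 2000m, from the Allocatable block above
	fmt.Printf("%dm (%d%%)\n", total, total*100/allocatable)
	// Prints: 850m (42%); integer division matches kubectl's truncation.
}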
	
	
	==> dmesg <==
	[Nov29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014634] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.570975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032231] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767655] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.282538] kauditd_printk_skb: 36 callbacks suppressed
	[Nov29 08:39] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001077] FS-Cache: O-cookie d=00000000b08097f7{9P.session} n=00000000a17ba85f
	[  +0.001074] FS-Cache: O-key=[10] '34323935323231393134'
	[  +0.000776] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=00000000b08097f7{9P.session} n=00000000534469ad
	[  +0.001092] FS-Cache: N-key=[10] '34323935323231393134'
	[Nov29 09:19] hrtimer: interrupt took 12545193 ns
	
	
	==> etcd [661e12966a53be87d5a2e6fb355185b6f2734aeca24c9e641c7fa53c37209721] <==
	{"level":"warn","ts":"2025-11-29T09:23:23.231019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.266191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.281780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.297924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.318248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.333998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.359223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.377805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.395837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.414266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.432564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.453415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.474224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.489032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.509856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.536856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.564247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.587528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.609973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.625275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.680984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.701783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.719986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.739287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.815769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40852","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:24:27 up  1:06,  0 user,  load average: 3.88, 3.59, 3.03
	Linux default-k8s-diff-port-528769 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2750e199427d877c7b53679b11cd0c165939fd88df4d918d6d213be02be3cc4] <==
	I1129 09:23:33.584709       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:23:33.584996       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 09:23:33.585110       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:23:33.585122       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:23:33.585132       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:23:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:23:33.786289       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:23:33.786313       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:23:33.786323       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:23:33.787301       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 09:24:03.787031       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 09:24:03.787144       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 09:24:03.787277       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1129 09:24:03.787401       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1129 09:24:05.386875       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:24:05.386902       1 metrics.go:72] Registering metrics
	I1129 09:24:05.386966       1 controller.go:711] "Syncing nftables rules"
	I1129 09:24:13.795443       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:24:13.795489       1 main.go:301] handling current node
	I1129 09:24:23.787328       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:24:23.787369       1 main.go:301] handling current node
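The kindnet log above shows its list/watch calls failing with "dial tcp 10.96.0.1:443: i/o timeout" and then the caches syncing at 09:24:05, i.e. the pod could not reach the apiserver through the kubernetes Service VIP until the service NAT rules were in place. A minimal reachability check of that kind is sketched below; the VIP and port are taken from the log, everything else is illustrative and is not kindnet's code.

package main

import (
	"fmt"
	"net"
	"time"
)

// canReachAPIServer reports whether a TCP connection to the kubernetes
// Service VIP succeeds within the timeout, which is effectively what
// the failing list/watch calls above depend on.
func canReachAPIServer(vip string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", vip, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println("10.96.0.1:443 reachable:", canReachAPIServer("10.96.0.1:443", 3*time.Second))
}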
	
	
	==> kube-apiserver [364a9c3a9acf58cb422339f5fcc65e96d64f74407b8d012969344a32689511e6] <==
	E1129 09:23:24.921595       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1129 09:23:24.948570       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:23:24.948596       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:23:24.953622       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:23:24.954067       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:23:24.958791       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:23:25.105631       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:23:25.560754       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:23:25.568051       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:23:25.568365       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:23:26.350167       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:23:26.407460       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:23:26.473890       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:23:26.481755       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1129 09:23:26.483091       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:23:26.488901       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:23:26.761455       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:23:27.717216       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:23:27.734168       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:23:27.744612       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:23:31.862769       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1129 09:23:32.062627       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:23:32.875875       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:23:32.895989       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1129 09:24:26.325679       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:60292: use of closed network connection
	
	
	==> kube-controller-manager [61f9930bff256fbd985bb52ca9131ffb487a2f8940662219e3745714f98a0f4a] <==
	I1129 09:23:31.760579       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 09:23:31.760663       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:23:31.760863       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:23:31.760998       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:23:31.761041       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:23:31.762341       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:23:31.764732       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:23:31.765095       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:23:31.765196       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:23:31.765274       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:23:31.765366       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:23:31.767132       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:23:31.775733       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:23:31.786373       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:23:31.797109       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:23:31.802610       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:23:31.804909       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:23:31.807689       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:23:31.808036       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:23:31.809274       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:23:31.809281       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:23:31.809700       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 09:23:31.828833       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:23:31.830061       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:24:16.818584       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a502369f7017ed670f089183ad3806a271bb55615ca62f564dfdb79a3c5f3044] <==
	I1129 09:23:33.497963       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:23:33.587379       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:23:33.688359       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:23:33.688397       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 09:23:33.688477       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:23:33.719414       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:23:33.719469       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:23:33.727040       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:23:33.727707       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:23:33.727742       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:23:33.740158       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:23:33.740180       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:23:33.744670       1 config.go:309] "Starting node config controller"
	I1129 09:23:33.744697       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:23:33.744707       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:23:33.747371       1 config.go:200] "Starting service config controller"
	I1129 09:23:33.747396       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:23:33.747414       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:23:33.747420       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:23:33.842016       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:23:33.848005       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:23:33.848089       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d7c6a263dd131373158be6025873d52b9ddefbf1920fd00eecb878044f55b34d] <==
	E1129 09:23:24.855631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:23:24.858984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:23:24.859624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:23:24.859710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:23:24.860763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:23:24.861026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:23:24.861498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:23:24.862421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:23:24.862632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:23:24.863278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:23:24.863457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:23:25.670278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:23:25.719109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:23:25.742101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:23:25.746636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:23:25.777484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:23:25.798268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:23:25.798806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:23:25.948244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1129 09:23:26.019546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:23:26.035660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:23:26.064213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:23:26.064469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:23:26.086332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1129 09:23:29.135242       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:23:31 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:31.925911    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-lib-modules\") pod \"kindnet-kbqpv\" (UID: \"a2e00f40-c25d-4a2c-bac7-625ebd0f84de\") " pod="kube-system/kindnet-kbqpv"
	Nov 29 09:23:31 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:31.925955    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l9jr\" (UniqueName: \"kubernetes.io/projected/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-kube-api-access-7l9jr\") pod \"kindnet-kbqpv\" (UID: \"a2e00f40-c25d-4a2c-bac7-625ebd0f84de\") " pod="kube-system/kindnet-kbqpv"
	Nov 29 09:23:31 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:31.925987    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-cni-cfg\") pod \"kindnet-kbqpv\" (UID: \"a2e00f40-c25d-4a2c-bac7-625ebd0f84de\") " pod="kube-system/kindnet-kbqpv"
	Nov 29 09:23:31 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:31.926005    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-xtables-lock\") pod \"kindnet-kbqpv\" (UID: \"a2e00f40-c25d-4a2c-bac7-625ebd0f84de\") " pod="kube-system/kindnet-kbqpv"
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:32.027155    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e27282e-db8e-430f-84db-c3ee57d5ff85-xtables-lock\") pod \"kube-proxy-2gqpj\" (UID: \"9e27282e-db8e-430f-84db-c3ee57d5ff85\") " pod="kube-system/kube-proxy-2gqpj"
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:32.027228    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e27282e-db8e-430f-84db-c3ee57d5ff85-kube-proxy\") pod \"kube-proxy-2gqpj\" (UID: \"9e27282e-db8e-430f-84db-c3ee57d5ff85\") " pod="kube-system/kube-proxy-2gqpj"
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:32.027248    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e27282e-db8e-430f-84db-c3ee57d5ff85-lib-modules\") pod \"kube-proxy-2gqpj\" (UID: \"9e27282e-db8e-430f-84db-c3ee57d5ff85\") " pod="kube-system/kube-proxy-2gqpj"
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:32.027268    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7bkd\" (UniqueName: \"kubernetes.io/projected/9e27282e-db8e-430f-84db-c3ee57d5ff85-kube-api-access-m7bkd\") pod \"kube-proxy-2gqpj\" (UID: \"9e27282e-db8e-430f-84db-c3ee57d5ff85\") " pod="kube-system/kube-proxy-2gqpj"
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.037123    1477 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.037170    1477 projected.go:196] Error preparing data for projected volume kube-api-access-7l9jr for pod kube-system/kindnet-kbqpv: configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.037253    1477 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-kube-api-access-7l9jr podName:a2e00f40-c25d-4a2c-bac7-625ebd0f84de nodeName:}" failed. No retries permitted until 2025-11-29 09:23:32.537226381 +0000 UTC m=+4.984345994 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7l9jr" (UniqueName: "kubernetes.io/projected/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-kube-api-access-7l9jr") pod "kindnet-kbqpv" (UID: "a2e00f40-c25d-4a2c-bac7-625ebd0f84de") : configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.145008    1477 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.145221    1477 projected.go:196] Error preparing data for projected volume kube-api-access-m7bkd for pod kube-system/kube-proxy-2gqpj: configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.145307    1477 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e27282e-db8e-430f-84db-c3ee57d5ff85-kube-api-access-m7bkd podName:9e27282e-db8e-430f-84db-c3ee57d5ff85 nodeName:}" failed. No retries permitted until 2025-11-29 09:23:32.64528531 +0000 UTC m=+5.092404915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m7bkd" (UniqueName: "kubernetes.io/projected/9e27282e-db8e-430f-84db-c3ee57d5ff85-kube-api-access-m7bkd") pod "kube-proxy-2gqpj" (UID: "9e27282e-db8e-430f-84db-c3ee57d5ff85") : configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:32.632535    1477 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 09:23:33 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:33.842597    1477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2gqpj" podStartSLOduration=2.842578654 podStartE2EDuration="2.842578654s" podCreationTimestamp="2025-11-29 09:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:23:33.822712631 +0000 UTC m=+6.269832236" watchObservedRunningTime="2025-11-29 09:23:33.842578654 +0000 UTC m=+6.289698259"
	Nov 29 09:23:35 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:35.979843    1477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kbqpv" podStartSLOduration=4.979824582 podStartE2EDuration="4.979824582s" podCreationTimestamp="2025-11-29 09:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:23:33.843071079 +0000 UTC m=+6.290190692" watchObservedRunningTime="2025-11-29 09:23:35.979824582 +0000 UTC m=+8.426944178"
	Nov 29 09:24:13 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:13.802941    1477 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:24:14 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:14.077933    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dswlr\" (UniqueName: \"kubernetes.io/projected/a5ab4c77-abf4-473f-aca7-608c3f1aac39-kube-api-access-dswlr\") pod \"storage-provisioner\" (UID: \"a5ab4c77-abf4-473f-aca7-608c3f1aac39\") " pod="kube-system/storage-provisioner"
	Nov 29 09:24:14 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:14.077990    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a5ab4c77-abf4-473f-aca7-608c3f1aac39-tmp\") pod \"storage-provisioner\" (UID: \"a5ab4c77-abf4-473f-aca7-608c3f1aac39\") " pod="kube-system/storage-provisioner"
	Nov 29 09:24:14 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:14.078014    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93f75ca1-8d71-403e-800c-4e8dfdcdecd7-config-volume\") pod \"coredns-66bc5c9577-ctldr\" (UID: \"93f75ca1-8d71-403e-800c-4e8dfdcdecd7\") " pod="kube-system/coredns-66bc5c9577-ctldr"
	Nov 29 09:24:14 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:14.078040    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhzc4\" (UniqueName: \"kubernetes.io/projected/93f75ca1-8d71-403e-800c-4e8dfdcdecd7-kube-api-access-jhzc4\") pod \"coredns-66bc5c9577-ctldr\" (UID: \"93f75ca1-8d71-403e-800c-4e8dfdcdecd7\") " pod="kube-system/coredns-66bc5c9577-ctldr"
	Nov 29 09:24:14 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:14.968817    1477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ctldr" podStartSLOduration=42.968796959 podStartE2EDuration="42.968796959s" podCreationTimestamp="2025-11-29 09:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:24:14.967862455 +0000 UTC m=+47.414982060" watchObservedRunningTime="2025-11-29 09:24:14.968796959 +0000 UTC m=+47.415916556"
	Nov 29 09:24:17 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:17.125059    1477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.125040242 podStartE2EDuration="43.125040242s" podCreationTimestamp="2025-11-29 09:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:24:15.107322701 +0000 UTC m=+47.554442306" watchObservedRunningTime="2025-11-29 09:24:17.125040242 +0000 UTC m=+49.572159855"
	Nov 29 09:24:17 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:17.324372    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr65p\" (UniqueName: \"kubernetes.io/projected/6ddeb490-d6e5-43be-98f2-27affe7aebb7-kube-api-access-hr65p\") pod \"busybox\" (UID: \"6ddeb490-d6e5-43be-98f2-27affe7aebb7\") " pod="default/busybox"
	
	
	==> storage-provisioner [e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107] <==
	I1129 09:24:14.853707       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:24:14.905923       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:24:14.909365       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:24:14.912998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:14.922741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:24:14.922972       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:24:14.928364       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-528769_5b0c6746-f6cc-4f27-bf66-1892fd10e14e!
	I1129 09:24:14.935354       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b4b92554-7b1e-407f-b41e-9009cdd5d295", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-528769_5b0c6746-f6cc-4f27-bf66-1892fd10e14e became leader
	W1129 09:24:14.946832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:14.972449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:24:15.034042       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-528769_5b0c6746-f6cc-4f27-bf66-1892fd10e14e!
	W1129 09:24:17.053099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:17.059979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:19.063274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:19.077415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:21.080865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:21.088326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:23.091976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:23.099748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:25.107647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:25.112735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:27.119633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:27.130335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
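Note: the repeated warnings.go:70 lines in the storage-provisioner log above appear to come from its leader-election client, which still reads and writes the v1 Endpoints object kube-system/k8s.io-minikube-hostpath as its lock; the API server flags each such request because v1 Endpoints is deprecated in favour of discovery.k8s.io/v1 EndpointSlice. Purely for illustration (this command is not part of the test harness), the lock object can be viewed with:

	kubectl --context default-k8s-diff-port-528769 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml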
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-528769 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
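Note: the query at helpers_test.go:269 above uses the field selector status.phase!=Running to list every pod, across all namespaces, that is not in the Running phase. An illustrative standalone equivalent (the wide output format is an addition here, not something the harness uses) would be:

	kubectl --context default-k8s-diff-port-528769 get pods -A --field-selector=status.phase!=Running -o wide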
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-528769
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-528769:

-- stdout --
	[
	    {
	        "Id": "5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24",
	        "Created": "2025-11-29T09:23:02.612868484Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240741,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:23:02.681046223Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24/hosts",
	        "LogPath": "/var/lib/docker/containers/5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24/5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24-json.log",
	        "Name": "/default-k8s-diff-port-528769",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-528769:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-528769",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e595d7c5c45a8436c44a3896dd53e6671070e18173f8996ea9b54071adffb24",
	                "LowerDir": "/var/lib/docker/overlay2/a71ddad71f36afe8b1808b74a527d6b54533293381c16b968f94e7b63152ecb5-init/diff:/var/lib/docker/overlay2/fc2ab0019906b90b3f033fa414f560878b73f7ff0ebdf77a0b554a40813009d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a71ddad71f36afe8b1808b74a527d6b54533293381c16b968f94e7b63152ecb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a71ddad71f36afe8b1808b74a527d6b54533293381c16b968f94e7b63152ecb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a71ddad71f36afe8b1808b74a527d6b54533293381c16b968f94e7b63152ecb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-528769",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-528769/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-528769",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-528769",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-528769",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ffd92d0ff38c3b4b07b8e8b74e48c8942c2e717121ec7d84b95a2732d3950d33",
	            "SandboxKey": "/var/run/docker/netns/ffd92d0ff38c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-528769": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:77:f0:80:dc:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7ec62bb3cbffa18628e6a9381e8ce5140e33e49e6ef531efa158dc96bd8c1702",
	                    "EndpointID": "d572de689c337c0c98565dbca13982a09a3482492e95e266067d6ab26f58a4a1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-528769",
	                        "5e595d7c5c45"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
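Note: the docker inspect output above records the container's published ports under NetworkSettings.Ports, e.g. 8444/tcp mapped to 127.0.0.1:33081. A single mapping can be read back with the same Go-template pattern the driver itself uses for the SSH port later in these logs; this variant targeting 8444/tcp is illustrative only, not taken from the harness:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-528769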
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-528769 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-528769 logs -n 25: (1.632867419s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-071895 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:20 UTC │
	│ stop    │ -p old-k8s-version-071895 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:20 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-071895 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ start   │ -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p no-preload-230403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ stop    │ -p no-preload-230403 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ addons  │ enable dashboard -p no-preload-230403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ start   │ -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:22 UTC │
	│ image   │ old-k8s-version-071895 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ pause   │ -p old-k8s-version-071895 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ unpause │ -p old-k8s-version-071895 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p old-k8s-version-071895                                                                                                                                                                                                                           │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p old-k8s-version-071895                                                                                                                                                                                                                           │ old-k8s-version-071895       │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ start   │ -p embed-certs-086358 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:23 UTC │
	│ image   │ no-preload-230403 image list --format=json                                                                                                                                                                                                          │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ pause   │ -p no-preload-230403 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ unpause │ -p no-preload-230403 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p no-preload-230403                                                                                                                                                                                                                                │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p no-preload-230403                                                                                                                                                                                                                                │ no-preload-230403            │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ delete  │ -p disable-driver-mounts-267340                                                                                                                                                                                                                     │ disable-driver-mounts-267340 │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	│ start   │ -p default-k8s-diff-port-528769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-528769 │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:24 UTC │
	│ addons  │ enable metrics-server -p embed-certs-086358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:23 UTC │ 29 Nov 25 09:23 UTC │
	│ stop    │ -p embed-certs-086358 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:23 UTC │ 29 Nov 25 09:24 UTC │
	│ addons  │ enable dashboard -p embed-certs-086358 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:24 UTC │ 29 Nov 25 09:24 UTC │
	│ start   │ -p embed-certs-086358 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-086358           │ jenkins │ v1.37.0 │ 29 Nov 25 09:24 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:24:05
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:24:05.993370  244729 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:24:05.993538  244729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:24:05.993548  244729 out.go:374] Setting ErrFile to fd 2...
	I1129 09:24:05.993553  244729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:24:05.993831  244729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:24:05.994259  244729 out.go:368] Setting JSON to false
	I1129 09:24:05.995321  244729 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3997,"bootTime":1764404249,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 09:24:05.995399  244729 start.go:143] virtualization:  
	I1129 09:24:06.000998  244729 out.go:179] * [embed-certs-086358] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:24:06.004905  244729 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:24:06.007980  244729 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:24:06.008344  244729 notify.go:221] Checking for updates...
	I1129 09:24:06.014127  244729 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:24:06.016991  244729 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 09:24:06.020026  244729 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:24:06.022976  244729 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:24:06.026528  244729 config.go:182] Loaded profile config "embed-certs-086358": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:24:06.027158  244729 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:24:06.061759  244729 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:24:06.061903  244729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:24:06.128608  244729 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:24:06.11805945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:24:06.128781  244729 docker.go:319] overlay module found
	I1129 09:24:06.132070  244729 out.go:179] * Using the docker driver based on existing profile
	I1129 09:24:06.134995  244729 start.go:309] selected driver: docker
	I1129 09:24:06.135017  244729 start.go:927] validating driver "docker" against &{Name:embed-certs-086358 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-086358 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:24:06.135157  244729 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:24:06.135891  244729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:24:06.200324  244729 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:24:06.190472362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:24:06.200751  244729 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:24:06.200793  244729 cni.go:84] Creating CNI manager for ""
	I1129 09:24:06.200851  244729 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:24:06.200893  244729 start.go:353] cluster config:
	{Name:embed-certs-086358 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-086358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:24:06.206026  244729 out.go:179] * Starting "embed-certs-086358" primary control-plane node in "embed-certs-086358" cluster
	I1129 09:24:06.208982  244729 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:24:06.212011  244729 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:24:06.214789  244729 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:24:06.214841  244729 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1129 09:24:06.214851  244729 cache.go:65] Caching tarball of preloaded images
	I1129 09:24:06.214874  244729 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:24:06.214939  244729 preload.go:238] Found /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1129 09:24:06.214949  244729 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1129 09:24:06.215066  244729 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/config.json ...
	I1129 09:24:06.235146  244729 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:24:06.235166  244729 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:24:06.235189  244729 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:24:06.235223  244729 start.go:360] acquireMachinesLock for embed-certs-086358: {Name:mk1ba4acf87c15b8011d084245765891b3b67063 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:24:06.235289  244729 start.go:364] duration metric: took 48.805µs to acquireMachinesLock for "embed-certs-086358"
	I1129 09:24:06.235317  244729 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:24:06.235322  244729 fix.go:54] fixHost starting: 
	I1129 09:24:06.235579  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:06.257939  244729 fix.go:112] recreateIfNeeded on embed-certs-086358: state=Stopped err=<nil>
	W1129 09:24:06.257970  244729 fix.go:138] unexpected machine state, will restart: <nil>
	W1129 09:24:03.830103  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:24:05.832862  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	I1129 09:24:06.261220  244729 out.go:252] * Restarting existing docker container for "embed-certs-086358" ...
	I1129 09:24:06.261332  244729 cli_runner.go:164] Run: docker start embed-certs-086358
	I1129 09:24:06.541847  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:06.562415  244729 kic.go:430] container "embed-certs-086358" state is running.
	I1129 09:24:06.562805  244729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-086358
	I1129 09:24:06.583556  244729 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/config.json ...
	I1129 09:24:06.583787  244729 machine.go:94] provisionDockerMachine start ...
	I1129 09:24:06.583857  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:06.609923  244729 main.go:143] libmachine: Using SSH client type: native
	I1129 09:24:06.610255  244729 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1129 09:24:06.610264  244729 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:24:06.611703  244729 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 09:24:09.764345  244729 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-086358
	
	I1129 09:24:09.764370  244729 ubuntu.go:182] provisioning hostname "embed-certs-086358"
	I1129 09:24:09.764466  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:09.782891  244729 main.go:143] libmachine: Using SSH client type: native
	I1129 09:24:09.783196  244729 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1129 09:24:09.783213  244729 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-086358 && echo "embed-certs-086358" | sudo tee /etc/hostname
	I1129 09:24:09.947120  244729 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-086358
	
	I1129 09:24:09.947221  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:09.967799  244729 main.go:143] libmachine: Using SSH client type: native
	I1129 09:24:09.968170  244729 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1129 09:24:09.968200  244729 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-086358' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-086358/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-086358' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:24:10.137289  244729 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:24:10.137317  244729 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-2317/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-2317/.minikube}
	I1129 09:24:10.137364  244729 ubuntu.go:190] setting up certificates
	I1129 09:24:10.137373  244729 provision.go:84] configureAuth start
	I1129 09:24:10.137451  244729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-086358
	I1129 09:24:10.156811  244729 provision.go:143] copyHostCerts
	I1129 09:24:10.156893  244729 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem, removing ...
	I1129 09:24:10.156914  244729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem
	I1129 09:24:10.156992  244729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/ca.pem (1082 bytes)
	I1129 09:24:10.157143  244729 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem, removing ...
	I1129 09:24:10.157157  244729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem
	I1129 09:24:10.157189  244729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/cert.pem (1123 bytes)
	I1129 09:24:10.157263  244729 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem, removing ...
	I1129 09:24:10.157274  244729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem
	I1129 09:24:10.157302  244729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-2317/.minikube/key.pem (1679 bytes)
	I1129 09:24:10.157364  244729 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem org=jenkins.embed-certs-086358 san=[127.0.0.1 192.168.76.2 embed-certs-086358 localhost minikube]
	I1129 09:24:10.392816  244729 provision.go:177] copyRemoteCerts
	I1129 09:24:10.392885  244729 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:24:10.392932  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:10.411749  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:10.516552  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1129 09:24:10.535275  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:24:10.553707  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:24:10.571311  244729 provision.go:87] duration metric: took 433.909357ms to configureAuth
	I1129 09:24:10.571384  244729 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:24:10.571608  244729 config.go:182] Loaded profile config "embed-certs-086358": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:24:10.571622  244729 machine.go:97] duration metric: took 3.987827692s to provisionDockerMachine
	I1129 09:24:10.571632  244729 start.go:293] postStartSetup for "embed-certs-086358" (driver="docker")
	I1129 09:24:10.571642  244729 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:24:10.571692  244729 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:24:10.571746  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:10.588957  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:10.693146  244729 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:24:10.696470  244729 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:24:10.696501  244729 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:24:10.696512  244729 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/addons for local assets ...
	I1129 09:24:10.696587  244729 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-2317/.minikube/files for local assets ...
	I1129 09:24:10.696731  244729 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem -> 41372.pem in /etc/ssl/certs
	I1129 09:24:10.696884  244729 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:24:10.704588  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:24:10.723714  244729 start.go:296] duration metric: took 152.05018ms for postStartSetup
	I1129 09:24:10.723845  244729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:24:10.723919  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:10.741660  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:10.849779  244729 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:24:10.854839  244729 fix.go:56] duration metric: took 4.619510663s for fixHost
	I1129 09:24:10.854865  244729 start.go:83] releasing machines lock for "embed-certs-086358", held for 4.619566744s
	I1129 09:24:10.854943  244729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-086358
	I1129 09:24:10.873290  244729 ssh_runner.go:195] Run: cat /version.json
	I1129 09:24:10.873347  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:10.873605  244729 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:24:10.873669  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:10.894020  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:10.907941  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	W1129 09:24:08.330645  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	W1129 09:24:10.829719  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	I1129 09:24:11.002316  244729 ssh_runner.go:195] Run: systemctl --version
	I1129 09:24:11.097017  244729 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:24:11.101719  244729 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:24:11.101848  244729 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:24:11.110157  244729 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:24:11.110185  244729 start.go:496] detecting cgroup driver to use...
	I1129 09:24:11.110219  244729 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 09:24:11.110278  244729 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:24:11.131051  244729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:24:11.145820  244729 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:24:11.145885  244729 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:24:11.162124  244729 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:24:11.176028  244729 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:24:11.313570  244729 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:24:11.472586  244729 docker.go:234] disabling docker service ...
	I1129 09:24:11.472737  244729 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:24:11.491220  244729 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:24:11.505817  244729 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:24:11.626910  244729 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:24:11.742092  244729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:24:11.756473  244729 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:24:11.774355  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:24:11.783913  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:24:11.803515  244729 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1129 09:24:11.803582  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1129 09:24:11.813807  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:24:11.823020  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:24:11.833481  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:24:11.844119  244729 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:24:11.853285  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:24:11.862918  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:24:11.872025  244729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:24:11.881228  244729 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:24:11.888995  244729 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:24:11.897102  244729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:24:12.016816  244729 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:24:12.169515  244729 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:24:12.169609  244729 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:24:12.173961  244729 start.go:564] Will wait 60s for crictl version
	I1129 09:24:12.174077  244729 ssh_runner.go:195] Run: which crictl
	I1129 09:24:12.178220  244729 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:24:12.210398  244729 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
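Note: the crictl version output above works because /etc/crictl.yaml, written a few lines earlier, points crictl at containerd's socket. Assuming that tee succeeded, the file holds just the runtime endpoint and can be sanity-checked directly:
	cat /etc/crictl.yaml   # expect: runtime-endpoint: unix:///run/containerd/containerd.sock
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version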
	I1129 09:24:12.210536  244729 ssh_runner.go:195] Run: containerd --version
	I1129 09:24:12.236042  244729 ssh_runner.go:195] Run: containerd --version
	I1129 09:24:12.270601  244729 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:24:12.273640  244729 cli_runner.go:164] Run: docker network inspect embed-certs-086358 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:24:12.291596  244729 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:24:12.295443  244729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
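Note: the guarded /etc/hosts rewrite above adds the host.minikube.internal entry for the gateway address grep'd one line earlier; assuming it succeeded, the mapping can be verified from inside the node with:
	getent hosts host.minikube.internal   # expect: 192.168.76.1  host.minikube.internal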
	I1129 09:24:12.305327  244729 kubeadm.go:884] updating cluster {Name:embed-certs-086358 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-086358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:24:12.305447  244729 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:24:12.305518  244729 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:24:12.332613  244729 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:24:12.332666  244729 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:24:12.332725  244729 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:24:12.359642  244729 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:24:12.359726  244729 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:24:12.359751  244729 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1129 09:24:12.359902  244729 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-086358 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-086358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
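Note: the generated kubelet unit above clears ExecStart and re-sets it via a drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). A quick way to confirm which ExecStart systemd actually resolved on the node is:
	systemctl cat kubelet | grep -A1 '^ExecStart='
	systemctl show -p ExecStart kubelet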
	I1129 09:24:12.359994  244729 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:24:12.388576  244729 cni.go:84] Creating CNI manager for ""
	I1129 09:24:12.388603  244729 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:24:12.388662  244729 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:24:12.388688  244729 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-086358 NodeName:embed-certs-086358 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:24:12.388820  244729 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-086358"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
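Note: a config like the one rendered above can be checked offline before kubeadm consumes it. As a hedged sketch (kubeadm config validate is available in recent kubeadm releases; the path matches the kubeadm.yaml.new written just below):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new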
	I1129 09:24:12.388896  244729 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:24:12.397252  244729 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:24:12.397322  244729 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:24:12.404903  244729 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1129 09:24:12.417896  244729 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:24:12.430645  244729 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1129 09:24:12.443545  244729 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:24:12.447233  244729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:24:12.457981  244729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:24:12.569560  244729 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:24:12.588142  244729 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358 for IP: 192.168.76.2
	I1129 09:24:12.588218  244729 certs.go:195] generating shared ca certs ...
	I1129 09:24:12.588249  244729 certs.go:227] acquiring lock for ca certs: {Name:mke655c14945a8520f2f9de36531df923afb2bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:24:12.588417  244729 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key
	I1129 09:24:12.588513  244729 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key
	I1129 09:24:12.588541  244729 certs.go:257] generating profile certs ...
	I1129 09:24:12.588754  244729 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/client.key
	I1129 09:24:12.588864  244729 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/apiserver.key.d6dcf241
	I1129 09:24:12.588937  244729 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/proxy-client.key
	I1129 09:24:12.589079  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem (1338 bytes)
	W1129 09:24:12.589145  244729 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137_empty.pem, impossibly tiny 0 bytes
	I1129 09:24:12.589174  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:24:12.589231  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:24:12.589289  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:24:12.589341  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/certs/key.pem (1679 bytes)
	I1129 09:24:12.589426  244729 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem (1708 bytes)
	I1129 09:24:12.590092  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:24:12.616180  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1129 09:24:12.635243  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:24:12.653833  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:24:12.673023  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1129 09:24:12.697188  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:24:12.715703  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:24:12.734522  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/embed-certs-086358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:24:12.760361  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/certs/4137.pem --> /usr/share/ca-certificates/4137.pem (1338 bytes)
	I1129 09:24:12.798536  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/ssl/certs/41372.pem --> /usr/share/ca-certificates/41372.pem (1708 bytes)
	I1129 09:24:12.824657  244729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:24:12.850589  244729 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:24:12.867341  244729 ssh_runner.go:195] Run: openssl version
	I1129 09:24:12.874558  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4137.pem && ln -fs /usr/share/ca-certificates/4137.pem /etc/ssl/certs/4137.pem"
	I1129 09:24:12.883348  244729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4137.pem
	I1129 09:24:12.887127  244729 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/4137.pem
	I1129 09:24:12.887240  244729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4137.pem
	I1129 09:24:12.935609  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4137.pem /etc/ssl/certs/51391683.0"
	I1129 09:24:12.944163  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41372.pem && ln -fs /usr/share/ca-certificates/41372.pem /etc/ssl/certs/41372.pem"
	I1129 09:24:12.953810  244729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41372.pem
	I1129 09:24:12.958540  244729 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/41372.pem
	I1129 09:24:12.958607  244729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41372.pem
	I1129 09:24:13.000925  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41372.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:24:13.010359  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:24:13.019426  244729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:24:13.023411  244729 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:24:13.023493  244729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:24:13.065818  244729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
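Note: the test -L / ln -fs pairs above maintain OpenSSL's hashed-symlink layout in /etc/ssl/certs, where each CA is reachable via a <subject-hash>.0 link; the hash names come straight from the openssl x509 -hash calls, e.g.:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, hence /etc/ssl/certs/b5213941.0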
	I1129 09:24:13.074138  244729 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:24:13.078259  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:24:13.126892  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:24:13.169645  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:24:13.225339  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:24:13.285692  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:24:13.350935  244729 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
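Note: the -checkend 86400 runs above ask openssl whether each control-plane certificate expires within the next 86400 seconds (24 hours); openssl exits 0 when the certificate stays valid for the whole window, so a standalone equivalent is:
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"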
	I1129 09:24:13.449811  244729 kubeadm.go:401] StartCluster: {Name:embed-certs-086358 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-086358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:24:13.449943  244729 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:24:13.450090  244729 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:24:13.517134  244729 cri.go:89] found id: "142f1b95a243cf67c1135910d74d40a419cf06ed0bf5077f6568ab892160c97a"
	I1129 09:24:13.517167  244729 cri.go:89] found id: "71da9bf637f997fd41268b358e43d421edcd0b0f351938a5905ffb2acc33b21f"
	I1129 09:24:13.517181  244729 cri.go:89] found id: "463144a8348fe09690fae6daaf1a23bd6db8686609b47d2764b6e39f5bbda974"
	I1129 09:24:13.517185  244729 cri.go:89] found id: "0221d25cfd4ddcdcc16f4f520608d24d9dfa2e0df4ef9c1eb5526108818141b0"
	I1129 09:24:13.517215  244729 cri.go:89] found id: "c0577342962bca3db58da726fcac889eec75133a917bc6e9cf1feb6a3f337e59"
	I1129 09:24:13.517225  244729 cri.go:89] found id: "63d03d07ac0a1758cd00c71c131868b3e936406ac3079afa609a554f2c6c1c6a"
	I1129 09:24:13.517229  244729 cri.go:89] found id: "9a782a50e3036c97768d6ec56613adcf9c14b720a7b95396868f2c8ae21e2c1d"
	I1129 09:24:13.517251  244729 cri.go:89] found id: "593a51223ee9a2a228c68dbef6b88d64186dd580dacb1aa36709e7d873bea72b"
	I1129 09:24:13.517254  244729 cri.go:89] found id: ""
	I1129 09:24:13.517337  244729 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1129 09:24:13.541068  244729 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383","pid":870,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383/rootfs","created":"2025-11-29T09:24:13.410790953Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-086358_c7ec7c736ca272174da91dd89bf4beb7","io.kubernetes.cri.san
dbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-086358","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c7ec7c736ca272174da91dd89bf4beb7"},"owner":"root"},{"ociVersion":"1.2.1","id":"88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1ace","pid":903,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1ace","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1ace/rootfs","created":"2025-11-29T09:24:13.430873559Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1a
ce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-086358_1b636d1bccc4c9706d219cde67be2f6e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-embed-certs-086358","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1b636d1bccc4c9706d219cde67be2f6e"},"owner":"root"},{"ociVersion":"1.2.1","id":"f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916","pid":936,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916/rootfs","created":"2025-11-29T09:24:13.496374964Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-
cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-086358_ad583e0080dbc35d38398d9c570ec954","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-086358","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ad583e0080dbc35d38398d9c570ec954"},"owner":"root"}]
	I1129 09:24:13.541285  244729 cri.go:126] list returned 3 containers
	I1129 09:24:13.541310  244729 cri.go:129] container: {ID:25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383 Status:running}
	I1129 09:24:13.541360  244729 cri.go:131] skipping 25b3e76fc4e0d09d34c49e38801b038101d09094796a22c2a75e6c00ab809383 - not in ps
	I1129 09:24:13.541371  244729 cri.go:129] container: {ID:88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1ace Status:created}
	I1129 09:24:13.541380  244729 cri.go:131] skipping 88204bb283762f3f1353f6f44172530cdf3fe8f277f40ae904313fce13cb1ace - not in ps
	I1129 09:24:13.541392  244729 cri.go:129] container: {ID:f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916 Status:created}
	I1129 09:24:13.541455  244729 cri.go:131] skipping f6a18b03e65003116c301eb4c1573f173a0d23aee020ad5fee165960260e4916 - not in ps
	I1129 09:24:13.541594  244729 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:24:13.556666  244729 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:24:13.556690  244729 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:24:13.556796  244729 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:24:13.567202  244729 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:24:13.567831  244729 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-086358" does not appear in /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:24:13.568114  244729 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-2317/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-086358" cluster setting kubeconfig missing "embed-certs-086358" context setting]
	I1129 09:24:13.568667  244729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
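Note: the kubeconfig repair above adds the missing embed-certs-086358 cluster and context entries; once the WriteFile lock is released, the result can be inspected with standard kubectl config commands, e.g.:
	kubectl --kubeconfig /home/jenkins/minikube-integration/22000-2317/kubeconfig config get-contexts
	kubectl --kubeconfig /home/jenkins/minikube-integration/22000-2317/kubeconfig config view --minify --context embed-certs-086358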
	I1129 09:24:13.570198  244729 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:24:13.593413  244729 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 09:24:13.593457  244729 kubeadm.go:602] duration metric: took 36.760991ms to restartPrimaryControlPlane
	I1129 09:24:13.593484  244729 kubeadm.go:403] duration metric: took 143.695012ms to StartCluster
	I1129 09:24:13.593508  244729 settings.go:142] acquiring lock: {Name:mk44917d1324740eeda65bf3aa312ad1561d3ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:24:13.593642  244729 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:24:13.595164  244729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/kubeconfig: {Name:mk3c09eb9158ba85342a695b6ac4b1a5f69e1b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:24:13.595836  244729 config.go:182] Loaded profile config "embed-certs-086358": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:24:13.595888  244729 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:24:13.595948  244729 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:24:13.596022  244729 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-086358"
	I1129 09:24:13.596043  244729 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-086358"
	W1129 09:24:13.596049  244729 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:24:13.596072  244729 host.go:66] Checking if "embed-certs-086358" exists ...
	I1129 09:24:13.596575  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:13.597267  244729 addons.go:70] Setting default-storageclass=true in profile "embed-certs-086358"
	I1129 09:24:13.597301  244729 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-086358"
	I1129 09:24:13.597423  244729 addons.go:70] Setting metrics-server=true in profile "embed-certs-086358"
	I1129 09:24:13.597439  244729 addons.go:239] Setting addon metrics-server=true in "embed-certs-086358"
	W1129 09:24:13.597446  244729 addons.go:248] addon metrics-server should already be in state true
	I1129 09:24:13.597468  244729 host.go:66] Checking if "embed-certs-086358" exists ...
	I1129 09:24:13.597600  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:13.597909  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:13.600895  244729 addons.go:70] Setting dashboard=true in profile "embed-certs-086358"
	I1129 09:24:13.600924  244729 addons.go:239] Setting addon dashboard=true in "embed-certs-086358"
	W1129 09:24:13.601128  244729 addons.go:248] addon dashboard should already be in state true
	I1129 09:24:13.601165  244729 host.go:66] Checking if "embed-certs-086358" exists ...
	I1129 09:24:13.608138  244729 out.go:179] * Verifying Kubernetes components...
	I1129 09:24:13.610194  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:13.611896  244729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:24:13.640493  244729 addons.go:239] Setting addon default-storageclass=true in "embed-certs-086358"
	W1129 09:24:13.640567  244729 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:24:13.640607  244729 host.go:66] Checking if "embed-certs-086358" exists ...
	I1129 09:24:13.641154  244729 cli_runner.go:164] Run: docker container inspect embed-certs-086358 --format={{.State.Status}}
	I1129 09:24:13.691005  244729 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:24:13.694839  244729 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:24:13.694861  244729 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:24:13.694961  244729 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:24:13.694976  244729 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:24:13.695032  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:13.695063  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:13.712096  244729 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:24:13.712110  244729 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1129 09:24:13.716114  244729 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:24:13.717933  244729 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1129 09:24:13.717955  244729 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1129 09:24:13.718025  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:13.724509  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:24:13.724545  244729 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:24:13.724758  244729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-086358
	I1129 09:24:13.761058  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:13.768010  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:13.788833  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:13.797023  244729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/embed-certs-086358/id_rsa Username:docker}
	I1129 09:24:14.040589  244729 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:24:14.170216  244729 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:24:14.179858  244729 node_ready.go:35] waiting up to 6m0s for node "embed-certs-086358" to be "Ready" ...
	I1129 09:24:14.357642  244729 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1129 09:24:14.357711  244729 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1129 09:24:14.373863  244729 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:24:14.589631  244729 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1129 09:24:14.589699  244729 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1129 09:24:14.788141  244729 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:24:14.788208  244729 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1129 09:24:14.894948  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:24:14.895013  244729 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:24:15.020436  244729 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:24:15.175046  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:24:15.175122  244729 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:24:15.298748  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:24:15.298775  244729 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:24:15.384364  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:24:15.384390  244729 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:24:15.441830  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:24:15.441904  244729 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:24:15.507101  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:24:15.507184  244729 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:24:15.566346  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:24:15.566375  244729 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:24:15.601213  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:24:15.601242  244729 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:24:15.630607  244729 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:24:15.630633  244729 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:24:15.667943  244729 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
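Note: the single kubectl apply above installs all of the dashboard manifests staged under /etc/kubernetes/addons. A hedged follow-up check (the kubernetes-dashboard namespace and deployment name are the addon's usual ones, assumed here rather than taken from this log) would be:
	kubectl --context embed-certs-086358 -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=2m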
	W1129 09:24:13.330519  240275 node_ready.go:57] node "default-k8s-diff-port-528769" has "Ready":"False" status (will retry)
	I1129 09:24:13.837584  240275 node_ready.go:49] node "default-k8s-diff-port-528769" is "Ready"
	I1129 09:24:13.837625  240275 node_ready.go:38] duration metric: took 39.511473274s for node "default-k8s-diff-port-528769" to be "Ready" ...
	I1129 09:24:13.837641  240275 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:24:13.837703  240275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:24:13.888608  240275 api_server.go:72] duration metric: took 40.77055457s to wait for apiserver process to appear ...
	I1129 09:24:13.888657  240275 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:24:13.888684  240275 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1129 09:24:13.902197  240275 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1129 09:24:13.911385  240275 api_server.go:141] control plane version: v1.34.1
	I1129 09:24:13.911472  240275 api_server.go:131] duration metric: took 22.805748ms to wait for apiserver health ...
	I1129 09:24:13.911498  240275 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:24:13.932369  240275 system_pods.go:59] 8 kube-system pods found
	I1129 09:24:13.932411  240275 system_pods.go:61] "coredns-66bc5c9577-ctldr" [93f75ca1-8d71-403e-800c-4e8dfdcdecd7] Pending
	I1129 09:24:13.932418  240275 system_pods.go:61] "etcd-default-k8s-diff-port-528769" [71ce5ce8-1e99-4a37-a8d6-6e431a9bb7f0] Running
	I1129 09:24:13.932422  240275 system_pods.go:61] "kindnet-kbqpv" [a2e00f40-c25d-4a2c-bac7-625ebd0f84de] Running
	I1129 09:24:13.932427  240275 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-528769" [d0d8dd1a-0031-4f91-b707-a269ba65d0cb] Running
	I1129 09:24:13.932431  240275 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-528769" [afc764fa-530b-4a51-af87-d3800da90c3f] Running
	I1129 09:24:13.932434  240275 system_pods.go:61] "kube-proxy-2gqpj" [9e27282e-db8e-430f-84db-c3ee57d5ff85] Running
	I1129 09:24:13.932438  240275 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-528769" [1e369541-195e-4df1-9527-732b37ad7172] Running
	I1129 09:24:13.932443  240275 system_pods.go:61] "storage-provisioner" [a5ab4c77-abf4-473f-aca7-608c3f1aac39] Pending
	I1129 09:24:13.932449  240275 system_pods.go:74] duration metric: took 20.916105ms to wait for pod list to return data ...
	I1129 09:24:13.932459  240275 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:24:13.941567  240275 default_sa.go:45] found service account: "default"
	I1129 09:24:13.941598  240275 default_sa.go:55] duration metric: took 9.132838ms for default service account to be created ...
	I1129 09:24:13.941608  240275 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:24:13.965924  240275 system_pods.go:86] 8 kube-system pods found
	I1129 09:24:13.965964  240275 system_pods.go:89] "coredns-66bc5c9577-ctldr" [93f75ca1-8d71-403e-800c-4e8dfdcdecd7] Pending
	I1129 09:24:13.965971  240275 system_pods.go:89] "etcd-default-k8s-diff-port-528769" [71ce5ce8-1e99-4a37-a8d6-6e431a9bb7f0] Running
	I1129 09:24:13.965979  240275 system_pods.go:89] "kindnet-kbqpv" [a2e00f40-c25d-4a2c-bac7-625ebd0f84de] Running
	I1129 09:24:13.965984  240275 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-528769" [d0d8dd1a-0031-4f91-b707-a269ba65d0cb] Running
	I1129 09:24:13.965989  240275 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-528769" [afc764fa-530b-4a51-af87-d3800da90c3f] Running
	I1129 09:24:13.965993  240275 system_pods.go:89] "kube-proxy-2gqpj" [9e27282e-db8e-430f-84db-c3ee57d5ff85] Running
	I1129 09:24:13.965998  240275 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-528769" [1e369541-195e-4df1-9527-732b37ad7172] Running
	I1129 09:24:13.966040  240275 system_pods.go:89] "storage-provisioner" [a5ab4c77-abf4-473f-aca7-608c3f1aac39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:13.966064  240275 retry.go:31] will retry after 311.571586ms: missing components: kube-dns
	I1129 09:24:14.282202  240275 system_pods.go:86] 8 kube-system pods found
	I1129 09:24:14.282243  240275 system_pods.go:89] "coredns-66bc5c9577-ctldr" [93f75ca1-8d71-403e-800c-4e8dfdcdecd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:14.282252  240275 system_pods.go:89] "etcd-default-k8s-diff-port-528769" [71ce5ce8-1e99-4a37-a8d6-6e431a9bb7f0] Running
	I1129 09:24:14.282259  240275 system_pods.go:89] "kindnet-kbqpv" [a2e00f40-c25d-4a2c-bac7-625ebd0f84de] Running
	I1129 09:24:14.282263  240275 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-528769" [d0d8dd1a-0031-4f91-b707-a269ba65d0cb] Running
	I1129 09:24:14.282268  240275 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-528769" [afc764fa-530b-4a51-af87-d3800da90c3f] Running
	I1129 09:24:14.282272  240275 system_pods.go:89] "kube-proxy-2gqpj" [9e27282e-db8e-430f-84db-c3ee57d5ff85] Running
	I1129 09:24:14.282276  240275 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-528769" [1e369541-195e-4df1-9527-732b37ad7172] Running
	I1129 09:24:14.282283  240275 system_pods.go:89] "storage-provisioner" [a5ab4c77-abf4-473f-aca7-608c3f1aac39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:14.282298  240275 retry.go:31] will retry after 347.295337ms: missing components: kube-dns
	I1129 09:24:14.634494  240275 system_pods.go:86] 8 kube-system pods found
	I1129 09:24:14.634533  240275 system_pods.go:89] "coredns-66bc5c9577-ctldr" [93f75ca1-8d71-403e-800c-4e8dfdcdecd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:14.634541  240275 system_pods.go:89] "etcd-default-k8s-diff-port-528769" [71ce5ce8-1e99-4a37-a8d6-6e431a9bb7f0] Running
	I1129 09:24:14.634548  240275 system_pods.go:89] "kindnet-kbqpv" [a2e00f40-c25d-4a2c-bac7-625ebd0f84de] Running
	I1129 09:24:14.634553  240275 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-528769" [d0d8dd1a-0031-4f91-b707-a269ba65d0cb] Running
	I1129 09:24:14.634565  240275 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-528769" [afc764fa-530b-4a51-af87-d3800da90c3f] Running
	I1129 09:24:14.634572  240275 system_pods.go:89] "kube-proxy-2gqpj" [9e27282e-db8e-430f-84db-c3ee57d5ff85] Running
	I1129 09:24:14.634576  240275 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-528769" [1e369541-195e-4df1-9527-732b37ad7172] Running
	I1129 09:24:14.634584  240275 system_pods.go:89] "storage-provisioner" [a5ab4c77-abf4-473f-aca7-608c3f1aac39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:14.634604  240275 retry.go:31] will retry after 330.852195ms: missing components: kube-dns
	I1129 09:24:14.984363  240275 system_pods.go:86] 8 kube-system pods found
	I1129 09:24:14.984400  240275 system_pods.go:89] "coredns-66bc5c9577-ctldr" [93f75ca1-8d71-403e-800c-4e8dfdcdecd7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:14.984408  240275 system_pods.go:89] "etcd-default-k8s-diff-port-528769" [71ce5ce8-1e99-4a37-a8d6-6e431a9bb7f0] Running
	I1129 09:24:14.984415  240275 system_pods.go:89] "kindnet-kbqpv" [a2e00f40-c25d-4a2c-bac7-625ebd0f84de] Running
	I1129 09:24:14.984419  240275 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-528769" [d0d8dd1a-0031-4f91-b707-a269ba65d0cb] Running
	I1129 09:24:14.984423  240275 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-528769" [afc764fa-530b-4a51-af87-d3800da90c3f] Running
	I1129 09:24:14.984428  240275 system_pods.go:89] "kube-proxy-2gqpj" [9e27282e-db8e-430f-84db-c3ee57d5ff85] Running
	I1129 09:24:14.984431  240275 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-528769" [1e369541-195e-4df1-9527-732b37ad7172] Running
	I1129 09:24:14.984437  240275 system_pods.go:89] "storage-provisioner" [a5ab4c77-abf4-473f-aca7-608c3f1aac39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:14.984445  240275 system_pods.go:126] duration metric: took 1.042830835s to wait for k8s-apps to be running ...
	I1129 09:24:14.984453  240275 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:24:14.984507  240275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:24:15.034493  240275 system_svc.go:56] duration metric: took 50.029542ms WaitForService to wait for kubelet
	I1129 09:24:15.034527  240275 kubeadm.go:587] duration metric: took 41.91648463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:24:15.034584  240275 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:24:15.084196  240275 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:24:15.084229  240275 node_conditions.go:123] node cpu capacity is 2
	I1129 09:24:15.084242  240275 node_conditions.go:105] duration metric: took 49.644572ms to run NodePressure ...
	I1129 09:24:15.084277  240275 start.go:242] waiting for startup goroutines ...
	I1129 09:24:15.084293  240275 start.go:247] waiting for cluster config update ...
	I1129 09:24:15.084306  240275 start.go:256] writing updated cluster config ...
	I1129 09:24:15.084653  240275 ssh_runner.go:195] Run: rm -f paused
	I1129 09:24:15.088604  240275 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:24:15.111106  240275 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ctldr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.127162  240275 pod_ready.go:94] pod "coredns-66bc5c9577-ctldr" is "Ready"
	I1129 09:24:15.127191  240275 pod_ready.go:86] duration metric: took 16.054824ms for pod "coredns-66bc5c9577-ctldr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.134974  240275 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.144596  240275 pod_ready.go:94] pod "etcd-default-k8s-diff-port-528769" is "Ready"
	I1129 09:24:15.144711  240275 pod_ready.go:86] duration metric: took 9.703328ms for pod "etcd-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.149891  240275 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.156220  240275 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-528769" is "Ready"
	I1129 09:24:15.156248  240275 pod_ready.go:86] duration metric: took 6.330466ms for pod "kube-apiserver-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.162695  240275 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.493140  240275 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-528769" is "Ready"
	I1129 09:24:15.493167  240275 pod_ready.go:86] duration metric: took 330.448997ms for pod "kube-controller-manager-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:15.693783  240275 pod_ready.go:83] waiting for pod "kube-proxy-2gqpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:16.093282  240275 pod_ready.go:94] pod "kube-proxy-2gqpj" is "Ready"
	I1129 09:24:16.093314  240275 pod_ready.go:86] duration metric: took 399.498541ms for pod "kube-proxy-2gqpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:16.293472  240275 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:16.693529  240275 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-528769" is "Ready"
	I1129 09:24:16.693558  240275 pod_ready.go:86] duration metric: took 400.058282ms for pod "kube-scheduler-default-k8s-diff-port-528769" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:16.693572  240275 pod_ready.go:40] duration metric: took 1.604882772s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:24:16.803967  240275 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 09:24:16.807986  240275 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-528769" cluster and "default" namespace by default
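The pod_ready lines above record minikube's extra wait: every kube-system pod matching one of the listed labels (k8s-app=kube-dns, component=etcd, and so on) must report the Ready condition before the start is declared done. As a hedged illustration only (this is not minikube's own code; the kubeconfig path, timeout, and helper names are assumptions), a minimal client-go sketch of that style of check could look like:

	// waitready.go: poll kube-system pods matching each label selector until all report Ready.
	// Illustrative sketch; selectors and the 4m budget mirror the log above, the helper is hypothetical.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		deadline := time.Now().Add(4 * time.Minute) // matches the "extra waiting up to 4m0s" in the log
		for _, sel := range selectors {
			for {
				pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
				if err == nil {
					ready := true
					for i := range pods.Items {
						if !podReady(&pods.Items[i]) {
							ready = false
						}
					}
					if ready {
						break
					}
				}
				if time.Now().After(deadline) {
					fmt.Printf("timed out waiting for %q\n", sel)
					break
				}
				time.Sleep(2 * time.Second)
			}
		}
		fmt.Println("all selectors Ready (or timed out)")
	}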
	I1129 09:24:19.089120  244729 node_ready.go:49] node "embed-certs-086358" is "Ready"
	I1129 09:24:19.089148  244729 node_ready.go:38] duration metric: took 4.909218102s for node "embed-certs-086358" to be "Ready" ...
	I1129 09:24:19.089161  244729 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:24:19.089220  244729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:24:19.357422  244729 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.187168109s)
	I1129 09:24:21.764211  244729 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.3903087s)
	I1129 09:24:21.814241  244729 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.793761819s)
	I1129 09:24:21.814277  244729 addons.go:495] Verifying addon metrics-server=true in "embed-certs-086358"
	I1129 09:24:21.814383  244729 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.146399725s)
	I1129 09:24:21.814573  244729 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.725340975s)
	I1129 09:24:21.814593  244729 api_server.go:72] duration metric: took 8.217827537s to wait for apiserver process to appear ...
	I1129 09:24:21.814599  244729 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:24:21.814624  244729 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:24:21.817762  244729 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-086358 addons enable metrics-server
	
	I1129 09:24:21.820786  244729 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1129 09:24:21.823248  244729 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:24:21.823281  244729 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:24:21.824543  244729 addons.go:530] duration metric: took 8.228593764s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1129 09:24:22.314854  244729 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:24:22.323101  244729 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:24:22.324154  244729 api_server.go:141] control plane version: v1.34.1
	I1129 09:24:22.324182  244729 api_server.go:131] duration metric: took 509.576989ms to wait for apiserver health ...
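The healthz exchange above shows the first probe returning 500 because the apiservice-discovery-controller post-start hook had not completed, and a retry roughly half a second later returning 200. A minimal sketch of that kind of probe, assuming the same endpoint URL as the log and skipping TLS verification purely for brevity (a real client would present proper credentials and CA data):

	// healthzprobe.go: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
	// Illustrative sketch; the URL is the one seen in the log above, TLS verification is skipped for brevity.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.76.2:8443/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // "ok"
					return
				}
				// On 500 the body lists each [+]/[-] post-start hook, as in the dump above.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			if time.Now().After(deadline) {
				fmt.Println("gave up waiting for healthz")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}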
	I1129 09:24:22.324193  244729 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:24:22.328151  244729 system_pods.go:59] 9 kube-system pods found
	I1129 09:24:22.328189  244729 system_pods.go:61] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:22.328205  244729 system_pods.go:61] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:24:22.328212  244729 system_pods.go:61] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:24:22.328219  244729 system_pods.go:61] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:24:22.328234  244729 system_pods.go:61] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:24:22.328248  244729 system_pods.go:61] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:24:22.328284  244729 system_pods.go:61] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:24:22.328296  244729 system_pods.go:61] "metrics-server-746fcd58dc-dc5c4" [51467193-8be4-44e0-9cf2-e54613662115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:24:22.328302  244729 system_pods.go:61] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:22.328307  244729 system_pods.go:74] duration metric: took 4.108882ms to wait for pod list to return data ...
	I1129 09:24:22.328317  244729 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:24:22.332684  244729 default_sa.go:45] found service account: "default"
	I1129 09:24:22.332713  244729 default_sa.go:55] duration metric: took 4.388999ms for default service account to be created ...
	I1129 09:24:22.332726  244729 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:24:22.337849  244729 system_pods.go:86] 9 kube-system pods found
	I1129 09:24:22.337882  244729 system_pods.go:89] "coredns-66bc5c9577-2fhrs" [224b9d8a-65f2-44ed-b5b3-9b8f39ac6854] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:22.337891  244729 system_pods.go:89] "etcd-embed-certs-086358" [674a8f81-94b4-41ce-94c2-90cb52b67601] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:24:22.337900  244729 system_pods.go:89] "kindnet-2x7dg" [4945072e-8049-437d-8593-8f1de5316222] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:24:22.337907  244729 system_pods.go:89] "kube-apiserver-embed-certs-086358" [68dfb4c7-7463-4946-bbef-d3002539fd2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:24:22.337914  244729 system_pods.go:89] "kube-controller-manager-embed-certs-086358" [c5085977-e0b5-48d7-8a13-40e11f6c63e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:24:22.337926  244729 system_pods.go:89] "kube-proxy-2qzkl" [2def38f6-3e34-4e81-a66a-59f10b8fc3a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:24:22.337932  244729 system_pods.go:89] "kube-scheduler-embed-certs-086358" [f2afa9a4-1299-470a-a815-c0cf65b82307] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:24:22.337941  244729 system_pods.go:89] "metrics-server-746fcd58dc-dc5c4" [51467193-8be4-44e0-9cf2-e54613662115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:24:22.337953  244729 system_pods.go:89] "storage-provisioner" [e08be393-d772-4606-bb5b-b754bee79505] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:24:22.337960  244729 system_pods.go:126] duration metric: took 5.228585ms to wait for k8s-apps to be running ...
	I1129 09:24:22.337973  244729 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:24:22.338032  244729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:24:22.357396  244729 system_svc.go:56] duration metric: took 19.414582ms WaitForService to wait for kubelet
	I1129 09:24:22.357423  244729 kubeadm.go:587] duration metric: took 8.760655597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:24:22.357443  244729 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:24:22.366590  244729 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:24:22.366619  244729 node_conditions.go:123] node cpu capacity is 2
	I1129 09:24:22.366631  244729 node_conditions.go:105] duration metric: took 9.183037ms to run NodePressure ...
	I1129 09:24:22.366644  244729 start.go:242] waiting for startup goroutines ...
	I1129 09:24:22.366651  244729 start.go:247] waiting for cluster config update ...
	I1129 09:24:22.366662  244729 start.go:256] writing updated cluster config ...
	I1129 09:24:22.366952  244729 ssh_runner.go:195] Run: rm -f paused
	I1129 09:24:22.373153  244729 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:24:22.377624  244729 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2fhrs" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:24:24.383517  244729 pod_ready.go:104] pod "coredns-66bc5c9577-2fhrs" is not "Ready", error: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	e773c8e5f04fb       1611cd07b61d5       10 seconds ago       Running             busybox                   0                   693368e1372e3       busybox                                                default
	e065b1d7f32b7       ba04bb24b9575       15 seconds ago       Running             storage-provisioner       0                   d13d89d8d1bc8       storage-provisioner                                    kube-system
	259f0db699021       138784d87c9c5       15 seconds ago       Running             coredns                   0                   109507c0808f5       coredns-66bc5c9577-ctldr                               kube-system
	e2750e199427d       b1a8c6f707935       57 seconds ago       Running             kindnet-cni               0                   22e5dbdc1c4ab       kindnet-kbqpv                                          kube-system
	a502369f7017e       05baa95f5142d       57 seconds ago       Running             kube-proxy                0                   5aedc7d875abd       kube-proxy-2gqpj                                       kube-system
	d7c6a263dd131       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   62c838fc3d19d       kube-scheduler-default-k8s-diff-port-528769            kube-system
	661e12966a53b       a1894772a478e       About a minute ago   Running             etcd                      0                   f30610c5f6ba6       etcd-default-k8s-diff-port-528769                      kube-system
	61f9930bff256       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   f4a57753833cd       kube-controller-manager-default-k8s-diff-port-528769   kube-system
	364a9c3a9acf5       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   869067602218b       kube-apiserver-default-k8s-diff-port-528769            kube-system
	
	
	==> containerd <==
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.592523114Z" level=info msg="Container e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.594868326Z" level=info msg="CreateContainer within sandbox \"109507c0808f5ba56dd13d3b720d16cf949975155ba9f6adb0b48dc124f075a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"259f0db699021fc88082b326e2747e9d6786d34dec60278315b7f50b3db4dfcf\""
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.598964523Z" level=info msg="StartContainer for \"259f0db699021fc88082b326e2747e9d6786d34dec60278315b7f50b3db4dfcf\""
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.600474743Z" level=info msg="connecting to shim 259f0db699021fc88082b326e2747e9d6786d34dec60278315b7f50b3db4dfcf" address="unix:///run/containerd/s/d7a868913f8106d92478231f95eba089e4dbc448931fa10ad71a8cd3d9558781" protocol=ttrpc version=3
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.611655995Z" level=info msg="CreateContainer within sandbox \"d13d89d8d1bc8e1a36f044f6bf215e74ded5b1bc02d69426db52207cde52479e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107\""
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.618257791Z" level=info msg="StartContainer for \"e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107\""
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.625051243Z" level=info msg="connecting to shim e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107" address="unix:///run/containerd/s/01482df1ac8a0207fb368df13b205c2c412f71ebb57f8730f9afcee6067b4a96" protocol=ttrpc version=3
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.801380189Z" level=info msg="StartContainer for \"e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107\" returns successfully"
	Nov 29 09:24:14 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:14.801541724Z" level=info msg="StartContainer for \"259f0db699021fc88082b326e2747e9d6786d34dec60278315b7f50b3db4dfcf\" returns successfully"
	Nov 29 09:24:17 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:17.752842357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6ddeb490-d6e5-43be-98f2-27affe7aebb7,Namespace:default,Attempt:0,}"
	Nov 29 09:24:17 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:17.822769937Z" level=info msg="connecting to shim 693368e1372e3e90621506b01fc5e13c5116ff341ef619df8ad9d266b094d64c" address="unix:///run/containerd/s/de6850fb4c1a170ebe372b18fe55fa90c6452b0d88d48cb612981df16a30a8b2" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:24:17 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:17.925310832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6ddeb490-d6e5-43be-98f2-27affe7aebb7,Namespace:default,Attempt:0,} returns sandbox id \"693368e1372e3e90621506b01fc5e13c5116ff341ef619df8ad9d266b094d64c\""
	Nov 29 09:24:17 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:17.927424634Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.060690578Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.064136235Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937190"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.067409182Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.070851836Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.071466183Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.143995691s"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.071940368Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.085600093Z" level=info msg="CreateContainer within sandbox \"693368e1372e3e90621506b01fc5e13c5116ff341ef619df8ad9d266b094d64c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.108278218Z" level=info msg="Container e773c8e5f04fb91a6e3981b96716ea4ca35a32aa8ca1b216a4bf7161f766975a: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.120356542Z" level=info msg="CreateContainer within sandbox \"693368e1372e3e90621506b01fc5e13c5116ff341ef619df8ad9d266b094d64c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"e773c8e5f04fb91a6e3981b96716ea4ca35a32aa8ca1b216a4bf7161f766975a\""
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.131654505Z" level=info msg="StartContainer for \"e773c8e5f04fb91a6e3981b96716ea4ca35a32aa8ca1b216a4bf7161f766975a\""
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.132736374Z" level=info msg="connecting to shim e773c8e5f04fb91a6e3981b96716ea4ca35a32aa8ca1b216a4bf7161f766975a" address="unix:///run/containerd/s/de6850fb4c1a170ebe372b18fe55fa90c6452b0d88d48cb612981df16a30a8b2" protocol=ttrpc version=3
	Nov 29 09:24:20 default-k8s-diff-port-528769 containerd[756]: time="2025-11-29T09:24:20.288700775Z" level=info msg="StartContainer for \"e773c8e5f04fb91a6e3981b96716ea4ca35a32aa8ca1b216a4bf7161f766975a\" returns successfully"
	
	
	==> coredns [259f0db699021fc88082b326e2747e9d6786d34dec60278315b7f50b3db4dfcf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33805 - 17985 "HINFO IN 7772944422344697396.6105932290377148216. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041076126s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-528769
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-528769
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=default-k8s-diff-port-528769
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_23_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:23:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-528769
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:24:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:24:29 +0000   Sat, 29 Nov 2025 09:23:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:24:29 +0000   Sat, 29 Nov 2025 09:23:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:24:29 +0000   Sat, 29 Nov 2025 09:23:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:24:29 +0000   Sat, 29 Nov 2025 09:24:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-528769
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                12e356a0-2870-4f3c-9aee-edeacd128acb
	  Boot ID:                    6647f078-4edd-40c5-9d0e-49eb5ed00bd7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-ctldr                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     58s
	  kube-system                 etcd-default-k8s-diff-port-528769                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-kbqpv                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-default-k8s-diff-port-528769             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-528769    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-2gqpj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-default-k8s-diff-port-528769             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 56s                kube-proxy       
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x7 over 71s)  kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node default-k8s-diff-port-528769 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node default-k8s-diff-port-528769 event: Registered Node default-k8s-diff-port-528769 in Controller
	  Normal   NodeReady                17s                kubelet          Node default-k8s-diff-port-528769 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014634] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.570975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032231] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767655] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.282538] kauditd_printk_skb: 36 callbacks suppressed
	[Nov29 08:39] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001077] FS-Cache: O-cookie d=00000000b08097f7{9P.session} n=00000000a17ba85f
	[  +0.001074] FS-Cache: O-key=[10] '34323935323231393134'
	[  +0.000776] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=00000000b08097f7{9P.session} n=00000000534469ad
	[  +0.001092] FS-Cache: N-key=[10] '34323935323231393134'
	[Nov29 09:19] hrtimer: interrupt took 12545193 ns
	
	
	==> etcd [661e12966a53be87d5a2e6fb355185b6f2734aeca24c9e641c7fa53c37209721] <==
	{"level":"warn","ts":"2025-11-29T09:23:23.231019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.266191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.281780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.297924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.318248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.333998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.359223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.377805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.395837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.414266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.432564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.453415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.474224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.489032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.509856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.536856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.564247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.587528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.609973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.625275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.680984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.701783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.719986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.739287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:23:23.815769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40852","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:24:30 up  1:07,  0 user,  load average: 3.88, 3.59, 3.03
	Linux default-k8s-diff-port-528769 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2750e199427d877c7b53679b11cd0c165939fd88df4d918d6d213be02be3cc4] <==
	I1129 09:23:33.584709       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:23:33.584996       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 09:23:33.585110       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:23:33.585122       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:23:33.585132       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:23:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:23:33.786289       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:23:33.786313       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:23:33.786323       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:23:33.787301       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 09:24:03.787031       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 09:24:03.787144       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 09:24:03.787277       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1129 09:24:03.787401       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1129 09:24:05.386875       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:24:05.386902       1 metrics.go:72] Registering metrics
	I1129 09:24:05.386966       1 controller.go:711] "Syncing nftables rules"
	I1129 09:24:13.795443       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:24:13.795489       1 main.go:301] handling current node
	I1129 09:24:23.787328       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:24:23.787369       1 main.go:301] handling current node
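The kindnet reflector errors above are plain dial timeouts against the in-cluster kubernetes Service VIP (10.96.0.1:443); once that address becomes reachable the informer caches sync and node handling resumes. A quick reachability probe of that VIP, as a hedged illustration (the address is the one shown in this log; other clusters would use a different ClusterIP):

	// vipprobe.go: check TCP reachability of the kubernetes Service ClusterIP from inside the cluster.
	// Illustrative sketch; 10.96.0.1:443 is the VIP appearing in the kindnet log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("service VIP unreachable:", err) // corresponds to the "i/o timeout" reflector errors
			return
		}
		conn.Close()
		fmt.Println("service VIP reachable")
	}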
	
	
	==> kube-apiserver [364a9c3a9acf58cb422339f5fcc65e96d64f74407b8d012969344a32689511e6] <==
	E1129 09:23:24.921595       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1129 09:23:24.948570       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:23:24.948596       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:23:24.953622       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:23:24.954067       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:23:24.958791       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:23:25.105631       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:23:25.560754       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:23:25.568051       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:23:25.568365       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:23:26.350167       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:23:26.407460       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:23:26.473890       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:23:26.481755       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1129 09:23:26.483091       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:23:26.488901       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:23:26.761455       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:23:27.717216       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:23:27.734168       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:23:27.744612       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:23:31.862769       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1129 09:23:32.062627       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:23:32.875875       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:23:32.895989       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1129 09:24:26.325679       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:60292: use of closed network connection
	
	
	==> kube-controller-manager [61f9930bff256fbd985bb52ca9131ffb487a2f8940662219e3745714f98a0f4a] <==
	I1129 09:23:31.760579       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 09:23:31.760663       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:23:31.760863       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:23:31.760998       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:23:31.761041       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:23:31.762341       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:23:31.764732       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:23:31.765095       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:23:31.765196       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:23:31.765274       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:23:31.765366       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:23:31.767132       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:23:31.775733       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:23:31.786373       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:23:31.797109       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:23:31.802610       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:23:31.804909       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:23:31.807689       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:23:31.808036       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:23:31.809274       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:23:31.809281       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:23:31.809700       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 09:23:31.828833       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:23:31.830061       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:24:16.818584       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a502369f7017ed670f089183ad3806a271bb55615ca62f564dfdb79a3c5f3044] <==
	I1129 09:23:33.497963       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:23:33.587379       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:23:33.688359       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:23:33.688397       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 09:23:33.688477       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:23:33.719414       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:23:33.719469       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:23:33.727040       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:23:33.727707       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:23:33.727742       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:23:33.740158       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:23:33.740180       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:23:33.744670       1 config.go:309] "Starting node config controller"
	I1129 09:23:33.744697       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:23:33.744707       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:23:33.747371       1 config.go:200] "Starting service config controller"
	I1129 09:23:33.747396       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:23:33.747414       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:23:33.747420       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:23:33.842016       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:23:33.848005       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:23:33.848089       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d7c6a263dd131373158be6025873d52b9ddefbf1920fd00eecb878044f55b34d] <==
	E1129 09:23:24.855631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:23:24.858984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:23:24.859624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:23:24.859710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:23:24.860763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:23:24.861026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:23:24.861498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:23:24.862421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:23:24.862632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:23:24.863278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:23:24.863457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:23:25.670278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:23:25.719109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:23:25.742101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:23:25.746636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:23:25.777484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:23:25.798268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:23:25.798806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:23:25.948244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1129 09:23:26.019546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:23:26.035660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:23:26.064213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:23:26.064469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:23:26.086332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1129 09:23:29.135242       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:23:31 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:31.925911    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-lib-modules\") pod \"kindnet-kbqpv\" (UID: \"a2e00f40-c25d-4a2c-bac7-625ebd0f84de\") " pod="kube-system/kindnet-kbqpv"
	Nov 29 09:23:31 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:31.925955    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l9jr\" (UniqueName: \"kubernetes.io/projected/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-kube-api-access-7l9jr\") pod \"kindnet-kbqpv\" (UID: \"a2e00f40-c25d-4a2c-bac7-625ebd0f84de\") " pod="kube-system/kindnet-kbqpv"
	Nov 29 09:23:31 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:31.925987    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-cni-cfg\") pod \"kindnet-kbqpv\" (UID: \"a2e00f40-c25d-4a2c-bac7-625ebd0f84de\") " pod="kube-system/kindnet-kbqpv"
	Nov 29 09:23:31 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:31.926005    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-xtables-lock\") pod \"kindnet-kbqpv\" (UID: \"a2e00f40-c25d-4a2c-bac7-625ebd0f84de\") " pod="kube-system/kindnet-kbqpv"
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:32.027155    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e27282e-db8e-430f-84db-c3ee57d5ff85-xtables-lock\") pod \"kube-proxy-2gqpj\" (UID: \"9e27282e-db8e-430f-84db-c3ee57d5ff85\") " pod="kube-system/kube-proxy-2gqpj"
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:32.027228    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e27282e-db8e-430f-84db-c3ee57d5ff85-kube-proxy\") pod \"kube-proxy-2gqpj\" (UID: \"9e27282e-db8e-430f-84db-c3ee57d5ff85\") " pod="kube-system/kube-proxy-2gqpj"
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:32.027248    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e27282e-db8e-430f-84db-c3ee57d5ff85-lib-modules\") pod \"kube-proxy-2gqpj\" (UID: \"9e27282e-db8e-430f-84db-c3ee57d5ff85\") " pod="kube-system/kube-proxy-2gqpj"
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:32.027268    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7bkd\" (UniqueName: \"kubernetes.io/projected/9e27282e-db8e-430f-84db-c3ee57d5ff85-kube-api-access-m7bkd\") pod \"kube-proxy-2gqpj\" (UID: \"9e27282e-db8e-430f-84db-c3ee57d5ff85\") " pod="kube-system/kube-proxy-2gqpj"
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.037123    1477 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.037170    1477 projected.go:196] Error preparing data for projected volume kube-api-access-7l9jr for pod kube-system/kindnet-kbqpv: configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.037253    1477 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-kube-api-access-7l9jr podName:a2e00f40-c25d-4a2c-bac7-625ebd0f84de nodeName:}" failed. No retries permitted until 2025-11-29 09:23:32.537226381 +0000 UTC m=+4.984345994 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7l9jr" (UniqueName: "kubernetes.io/projected/a2e00f40-c25d-4a2c-bac7-625ebd0f84de-kube-api-access-7l9jr") pod "kindnet-kbqpv" (UID: "a2e00f40-c25d-4a2c-bac7-625ebd0f84de") : configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.145008    1477 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.145221    1477 projected.go:196] Error preparing data for projected volume kube-api-access-m7bkd for pod kube-system/kube-proxy-2gqpj: configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: E1129 09:23:32.145307    1477 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e27282e-db8e-430f-84db-c3ee57d5ff85-kube-api-access-m7bkd podName:9e27282e-db8e-430f-84db-c3ee57d5ff85 nodeName:}" failed. No retries permitted until 2025-11-29 09:23:32.64528531 +0000 UTC m=+5.092404915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m7bkd" (UniqueName: "kubernetes.io/projected/9e27282e-db8e-430f-84db-c3ee57d5ff85-kube-api-access-m7bkd") pod "kube-proxy-2gqpj" (UID: "9e27282e-db8e-430f-84db-c3ee57d5ff85") : configmap "kube-root-ca.crt" not found
	Nov 29 09:23:32 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:32.632535    1477 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 09:23:33 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:33.842597    1477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2gqpj" podStartSLOduration=2.842578654 podStartE2EDuration="2.842578654s" podCreationTimestamp="2025-11-29 09:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:23:33.822712631 +0000 UTC m=+6.269832236" watchObservedRunningTime="2025-11-29 09:23:33.842578654 +0000 UTC m=+6.289698259"
	Nov 29 09:23:35 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:23:35.979843    1477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kbqpv" podStartSLOduration=4.979824582 podStartE2EDuration="4.979824582s" podCreationTimestamp="2025-11-29 09:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:23:33.843071079 +0000 UTC m=+6.290190692" watchObservedRunningTime="2025-11-29 09:23:35.979824582 +0000 UTC m=+8.426944178"
	Nov 29 09:24:13 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:13.802941    1477 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:24:14 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:14.077933    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dswlr\" (UniqueName: \"kubernetes.io/projected/a5ab4c77-abf4-473f-aca7-608c3f1aac39-kube-api-access-dswlr\") pod \"storage-provisioner\" (UID: \"a5ab4c77-abf4-473f-aca7-608c3f1aac39\") " pod="kube-system/storage-provisioner"
	Nov 29 09:24:14 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:14.077990    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a5ab4c77-abf4-473f-aca7-608c3f1aac39-tmp\") pod \"storage-provisioner\" (UID: \"a5ab4c77-abf4-473f-aca7-608c3f1aac39\") " pod="kube-system/storage-provisioner"
	Nov 29 09:24:14 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:14.078014    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93f75ca1-8d71-403e-800c-4e8dfdcdecd7-config-volume\") pod \"coredns-66bc5c9577-ctldr\" (UID: \"93f75ca1-8d71-403e-800c-4e8dfdcdecd7\") " pod="kube-system/coredns-66bc5c9577-ctldr"
	Nov 29 09:24:14 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:14.078040    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhzc4\" (UniqueName: \"kubernetes.io/projected/93f75ca1-8d71-403e-800c-4e8dfdcdecd7-kube-api-access-jhzc4\") pod \"coredns-66bc5c9577-ctldr\" (UID: \"93f75ca1-8d71-403e-800c-4e8dfdcdecd7\") " pod="kube-system/coredns-66bc5c9577-ctldr"
	Nov 29 09:24:14 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:14.968817    1477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ctldr" podStartSLOduration=42.968796959 podStartE2EDuration="42.968796959s" podCreationTimestamp="2025-11-29 09:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:24:14.967862455 +0000 UTC m=+47.414982060" watchObservedRunningTime="2025-11-29 09:24:14.968796959 +0000 UTC m=+47.415916556"
	Nov 29 09:24:17 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:17.125059    1477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.125040242 podStartE2EDuration="43.125040242s" podCreationTimestamp="2025-11-29 09:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:24:15.107322701 +0000 UTC m=+47.554442306" watchObservedRunningTime="2025-11-29 09:24:17.125040242 +0000 UTC m=+49.572159855"
	Nov 29 09:24:17 default-k8s-diff-port-528769 kubelet[1477]: I1129 09:24:17.324372    1477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr65p\" (UniqueName: \"kubernetes.io/projected/6ddeb490-d6e5-43be-98f2-27affe7aebb7-kube-api-access-hr65p\") pod \"busybox\" (UID: \"6ddeb490-d6e5-43be-98f2-27affe7aebb7\") " pod="default/busybox"
	
	
	==> storage-provisioner [e065b1d7f32b727bba244d99ffad2350a5a573b263c5c91f7e6f3bbec7332107] <==
	I1129 09:24:14.909365       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:24:14.912998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:14.922741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:24:14.922972       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:24:14.928364       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-528769_5b0c6746-f6cc-4f27-bf66-1892fd10e14e!
	I1129 09:24:14.935354       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b4b92554-7b1e-407f-b41e-9009cdd5d295", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-528769_5b0c6746-f6cc-4f27-bf66-1892fd10e14e became leader
	W1129 09:24:14.946832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:14.972449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:24:15.034042       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-528769_5b0c6746-f6cc-4f27-bf66-1892fd10e14e!
	W1129 09:24:17.053099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:17.059979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:19.063274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:19.077415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:21.080865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:21.088326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:23.091976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:23.099748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:25.107647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:25.112735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:27.119633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:27.130335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:29.133105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:29.142048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:31.145494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:31.151136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-528769 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.17s)

                                                
                                    

Test pass (299/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.86
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 5.17
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.21
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 173.32
29 TestAddons/serial/Volcano 41.71
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 8.91
35 TestAddons/parallel/Registry 17.21
36 TestAddons/parallel/RegistryCreds 0.77
37 TestAddons/parallel/Ingress 19.1
38 TestAddons/parallel/InspektorGadget 11.89
39 TestAddons/parallel/MetricsServer 5.85
41 TestAddons/parallel/CSI 63.38
42 TestAddons/parallel/Headlamp 18.23
43 TestAddons/parallel/CloudSpanner 6.66
44 TestAddons/parallel/LocalPath 52.57
45 TestAddons/parallel/NvidiaDevicePlugin 7.11
46 TestAddons/parallel/Yakd 11.83
48 TestAddons/StoppedEnableDisable 12.36
49 TestCertOptions 35.45
50 TestCertExpiration 225.12
52 TestForceSystemdFlag 36.39
53 TestForceSystemdEnv 37.91
54 TestDockerEnvContainerd 48.83
58 TestErrorSpam/setup 33.5
59 TestErrorSpam/start 0.83
60 TestErrorSpam/status 1.17
61 TestErrorSpam/pause 1.73
62 TestErrorSpam/unpause 1.88
63 TestErrorSpam/stop 1.61
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 79.23
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.92
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.47
75 TestFunctional/serial/CacheCmd/cache/add_local 1.23
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 40.86
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.51
86 TestFunctional/serial/LogsFileCmd 1.6
87 TestFunctional/serial/InvalidService 4.67
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 7.97
91 TestFunctional/parallel/DryRun 0.52
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.38
97 TestFunctional/parallel/ServiceCmdConnect 8.66
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 29.16
101 TestFunctional/parallel/SSHCmd 0.77
102 TestFunctional/parallel/CpCmd 2.29
104 TestFunctional/parallel/FileSync 0.35
105 TestFunctional/parallel/CertSync 2.1
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
113 TestFunctional/parallel/License 0.36
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.49
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
126 TestFunctional/parallel/ServiceCmd/List 0.53
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
129 TestFunctional/parallel/ServiceCmd/Format 0.41
130 TestFunctional/parallel/ServiceCmd/URL 0.39
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
132 TestFunctional/parallel/ProfileCmd/profile_list 0.66
133 TestFunctional/parallel/MountCmd/any-port 8.87
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
135 TestFunctional/parallel/MountCmd/specific-port 1.34
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.02
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.39
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 6.09
144 TestFunctional/parallel/ImageCommands/Setup 1.13
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.17
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.44
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
155 TestFunctional/delete_echo-server_images 0.06
156 TestFunctional/delete_my-image_image 0.03
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 178.53
163 TestMultiControlPlane/serial/DeployApp 7.66
164 TestMultiControlPlane/serial/PingHostFromPods 1.66
165 TestMultiControlPlane/serial/AddWorkerNode 60.68
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.13
168 TestMultiControlPlane/serial/CopyFile 20.97
169 TestMultiControlPlane/serial/StopSecondaryNode 2.15
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
171 TestMultiControlPlane/serial/RestartSecondaryNode 13.44
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.51
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 87.05
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.36
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.47
177 TestMultiControlPlane/serial/RestartCluster 60.82
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.83
179 TestMultiControlPlane/serial/AddSecondaryNode 86.77
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 50.08
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.71
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.66
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.11
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 39.47
211 TestKicCustomNetwork/use_default_bridge_network 36.26
212 TestKicExistingNetwork 36.73
213 TestKicCustomSubnet 37.99
214 TestKicStaticIP 36.28
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 72.66
219 TestMountStart/serial/StartWithMountFirst 8.52
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.74
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.32
226 TestMountStart/serial/RestartStopped 7.44
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 108.16
231 TestMultiNode/serial/DeployApp2Nodes 4.95
232 TestMultiNode/serial/PingHostFrom2Pods 1.03
233 TestMultiNode/serial/AddNode 58.52
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.78
236 TestMultiNode/serial/CopyFile 10.46
237 TestMultiNode/serial/StopNode 2.38
238 TestMultiNode/serial/StartAfterStop 7.83
239 TestMultiNode/serial/RestartKeepsNodes 73.23
240 TestMultiNode/serial/DeleteNode 5.77
241 TestMultiNode/serial/StopMultiNode 24.15
242 TestMultiNode/serial/RestartMultiNode 57.27
243 TestMultiNode/serial/ValidateNameConflict 35.19
248 TestPreload 120.72
250 TestScheduledStopUnix 111.72
253 TestInsufficientStorage 13.15
254 TestRunningBinaryUpgrade 324.39
256 TestKubernetesUpgrade 361.91
257 TestMissingContainerUpgrade 134.58
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 41.83
261 TestNoKubernetes/serial/StartWithStopK8s 25.83
262 TestNoKubernetes/serial/Start 7.57
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
265 TestNoKubernetes/serial/ProfileList 0.73
266 TestNoKubernetes/serial/Stop 1.29
267 TestNoKubernetes/serial/StartNoArgs 6.44
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
269 TestStoppedBinaryUpgrade/Setup 1.12
270 TestStoppedBinaryUpgrade/Upgrade 311.17
271 TestStoppedBinaryUpgrade/MinikubeLogs 2.54
280 TestPause/serial/Start 62.07
281 TestPause/serial/SecondStartNoReconfiguration 6.71
282 TestPause/serial/Pause 0.76
283 TestPause/serial/VerifyStatus 0.34
284 TestPause/serial/Unpause 0.62
285 TestPause/serial/PauseAgain 0.82
286 TestPause/serial/DeletePaused 3
287 TestPause/serial/VerifyDeletedResources 14.17
295 TestNetworkPlugins/group/false 3.8
300 TestStartStop/group/old-k8s-version/serial/FirstStart 63.64
302 TestStartStop/group/no-preload/serial/FirstStart 66.55
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.63
305 TestStartStop/group/old-k8s-version/serial/Stop 12.92
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
307 TestStartStop/group/old-k8s-version/serial/SecondStart 53.65
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.43
310 TestStartStop/group/no-preload/serial/Stop 12.54
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/no-preload/serial/SecondStart 49.99
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
316 TestStartStop/group/old-k8s-version/serial/Pause 3.79
318 TestStartStop/group/embed-certs/serial/FirstStart 80.97
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
322 TestStartStop/group/no-preload/serial/Pause 3.63
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.01
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
327 TestStartStop/group/embed-certs/serial/Stop 12.19
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
329 TestStartStop/group/embed-certs/serial/SecondStart 54.59
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.3
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.15
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
337 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
338 TestStartStop/group/embed-certs/serial/Pause 3.32
340 TestStartStop/group/newest-cni/serial/FirstStart 39.4
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.88
345 TestNetworkPlugins/group/auto/Start 88.3
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.29
348 TestStartStop/group/newest-cni/serial/Stop 3.58
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
350 TestStartStop/group/newest-cni/serial/SecondStart 25.32
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
354 TestStartStop/group/newest-cni/serial/Pause 3.76
355 TestNetworkPlugins/group/kindnet/Start 82.19
356 TestNetworkPlugins/group/auto/KubeletFlags 0.35
357 TestNetworkPlugins/group/auto/NetCatPod 10.3
358 TestNetworkPlugins/group/auto/DNS 0.18
359 TestNetworkPlugins/group/auto/Localhost 0.19
360 TestNetworkPlugins/group/auto/HairPin 0.17
361 TestNetworkPlugins/group/calico/Start 60.94
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.51
364 TestNetworkPlugins/group/kindnet/NetCatPod 10.66
365 TestNetworkPlugins/group/kindnet/DNS 0.24
366 TestNetworkPlugins/group/kindnet/Localhost 0.3
367 TestNetworkPlugins/group/kindnet/HairPin 0.32
368 TestNetworkPlugins/group/custom-flannel/Start 63.19
369 TestNetworkPlugins/group/calico/ControllerPod 6.01
370 TestNetworkPlugins/group/calico/KubeletFlags 0.5
371 TestNetworkPlugins/group/calico/NetCatPod 11.41
372 TestNetworkPlugins/group/calico/DNS 0.26
373 TestNetworkPlugins/group/calico/Localhost 0.25
374 TestNetworkPlugins/group/calico/HairPin 0.27
375 TestNetworkPlugins/group/enable-default-cni/Start 86.14
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.35
378 TestNetworkPlugins/group/custom-flannel/DNS 0.24
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.27
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
381 TestNetworkPlugins/group/flannel/Start 63.75
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.33
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
389 TestNetworkPlugins/group/flannel/NetCatPod 10.4
390 TestNetworkPlugins/group/bridge/Start 81.85
391 TestNetworkPlugins/group/flannel/DNS 0.25
392 TestNetworkPlugins/group/flannel/Localhost 0.19
393 TestNetworkPlugins/group/flannel/HairPin 0.22
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
395 TestNetworkPlugins/group/bridge/NetCatPod 9.26
396 TestNetworkPlugins/group/bridge/DNS 0.18
397 TestNetworkPlugins/group/bridge/Localhost 0.15
398 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (5.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-845453 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-845453 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.863491134s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.86s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1129 08:28:40.742050    4137 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1129 08:28:40.742200    4137 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-845453
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-845453: exit status 85 (98.7811ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-845453 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-845453 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:28:34
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:28:34.925052    4143 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:28:34.925183    4143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:34.925216    4143 out.go:374] Setting ErrFile to fd 2...
	I1129 08:28:34.925230    4143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:34.925480    4143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	W1129 08:28:34.925617    4143 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22000-2317/.minikube/config/config.json: open /home/jenkins/minikube-integration/22000-2317/.minikube/config/config.json: no such file or directory
	I1129 08:28:34.926008    4143 out.go:368] Setting JSON to true
	I1129 08:28:34.926782    4143 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":666,"bootTime":1764404249,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 08:28:34.926851    4143 start.go:143] virtualization:  
	I1129 08:28:34.932274    4143 out.go:99] [download-only-845453] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1129 08:28:34.932449    4143 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball: no such file or directory
	I1129 08:28:34.932535    4143 notify.go:221] Checking for updates...
	I1129 08:28:34.935718    4143 out.go:171] MINIKUBE_LOCATION=22000
	I1129 08:28:34.939336    4143 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:28:34.942492    4143 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 08:28:34.945605    4143 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 08:28:34.948687    4143 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1129 08:28:34.954638    4143 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1129 08:28:34.954916    4143 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:28:34.981330    4143 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 08:28:34.981492    4143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:35.398329    4143 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-29 08:28:35.388918898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 08:28:35.398435    4143 docker.go:319] overlay module found
	I1129 08:28:35.401506    4143 out.go:99] Using the docker driver based on user configuration
	I1129 08:28:35.401544    4143 start.go:309] selected driver: docker
	I1129 08:28:35.401551    4143 start.go:927] validating driver "docker" against <nil>
	I1129 08:28:35.401646    4143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:35.460175    4143 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-29 08:28:35.451534166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 08:28:35.460342    4143 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 08:28:35.460677    4143 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1129 08:28:35.460850    4143 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1129 08:28:35.463912    4143 out.go:171] Using Docker driver with root privileges
	I1129 08:28:35.466853    4143 cni.go:84] Creating CNI manager for ""
	I1129 08:28:35.466928    4143 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 08:28:35.466943    4143 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 08:28:35.467022    4143 start.go:353] cluster config:
	{Name:download-only-845453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-845453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:28:35.470086    4143 out.go:99] Starting "download-only-845453" primary control-plane node in "download-only-845453" cluster
	I1129 08:28:35.470106    4143 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 08:28:35.472994    4143 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1129 08:28:35.473038    4143 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 08:28:35.473189    4143 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 08:28:35.488823    4143 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 08:28:35.489022    4143 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1129 08:28:35.489123    4143 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 08:28:35.528538    4143 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1129 08:28:35.528565    4143 cache.go:65] Caching tarball of preloaded images
	I1129 08:28:35.528773    4143 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 08:28:35.532071    4143 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1129 08:28:35.532139    4143 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1129 08:28:35.614744    4143 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1129 08:28:35.614867    4143 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-845453 host does not exist
	  To start a cluster, run: "minikube start -p download-only-845453"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-845453
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (5.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-866418 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-866418 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.169181281s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.17s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1129 08:28:46.383388    4137 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1129 08:28:46.383436    4137 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-866418
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-866418: exit status 85 (212.0101ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-845453 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-845453 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-845453                                                                                                                                                               │ download-only-845453 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-866418 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-866418 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:28:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:28:41.258822    4344 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:28:41.258939    4344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:41.258944    4344 out.go:374] Setting ErrFile to fd 2...
	I1129 08:28:41.258961    4344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:41.259209    4344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 08:28:41.259599    4344 out.go:368] Setting JSON to true
	I1129 08:28:41.260373    4344 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":672,"bootTime":1764404249,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 08:28:41.260442    4344 start.go:143] virtualization:  
	I1129 08:28:41.263828    4344 out.go:99] [download-only-866418] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 08:28:41.264102    4344 notify.go:221] Checking for updates...
	I1129 08:28:41.266946    4344 out.go:171] MINIKUBE_LOCATION=22000
	I1129 08:28:41.269986    4344 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:28:41.272752    4344 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 08:28:41.275619    4344 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 08:28:41.278647    4344 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1129 08:28:41.284450    4344 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1129 08:28:41.284827    4344 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:28:41.318539    4344 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 08:28:41.318652    4344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:41.382950    4344 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-29 08:28:41.374035844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 08:28:41.383062    4344 docker.go:319] overlay module found
	I1129 08:28:41.386162    4344 out.go:99] Using the docker driver based on user configuration
	I1129 08:28:41.386201    4344 start.go:309] selected driver: docker
	I1129 08:28:41.386209    4344 start.go:927] validating driver "docker" against <nil>
	I1129 08:28:41.386322    4344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:41.450181    4344 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-29 08:28:41.441372381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 08:28:41.450341    4344 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 08:28:41.450598    4344 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1129 08:28:41.450741    4344 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1129 08:28:41.453844    4344 out.go:171] Using Docker driver with root privileges
	I1129 08:28:41.456719    4344 cni.go:84] Creating CNI manager for ""
	I1129 08:28:41.456786    4344 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 08:28:41.456799    4344 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 08:28:41.456876    4344 start.go:353] cluster config:
	{Name:download-only-866418 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-866418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:28:41.459904    4344 out.go:99] Starting "download-only-866418" primary control-plane node in "download-only-866418" cluster
	I1129 08:28:41.459930    4344 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 08:28:41.462746    4344 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1129 08:28:41.462791    4344 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 08:28:41.462956    4344 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 08:28:41.478881    4344 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 08:28:41.479008    4344 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1129 08:28:41.479032    4344 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1129 08:28:41.479038    4344 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1129 08:28:41.479048    4344 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1129 08:28:41.526073    4344 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1129 08:28:41.526101    4344 cache.go:65] Caching tarball of preloaded images
	I1129 08:28:41.526269    4344 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 08:28:41.529314    4344 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1129 08:28:41.529345    4344 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1129 08:28:41.619250    4344 preload.go:295] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1129 08:28:41.619301    4344 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1129 08:28:45.752228    4344 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1129 08:28:45.752647    4344 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/download-only-866418/config.json ...
	I1129 08:28:45.752680    4344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/download-only-866418/config.json: {Name:mka9f0f4a02c36e082826ec74479d18f87722746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:45.752871    4344 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 08:28:45.753030    4344 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22000-2317/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-866418 host does not exist
	  To start a cluster, run: "minikube start -p download-only-866418"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.21s)
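Note: the non-zero exit from "minikube logs" is the expected outcome here. The profile was created with --download-only, so no host was ever started and there are no logs to collect; the test only asserts that the command fails and prints guidance. A reproduction sketch using the flags logged above (the profile name below is illustrative):

  # Create a download-only profile, then confirm `minikube logs` refuses to run against it.
  minikube start -o=json --download-only --force -p download-only-demo \
    --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker
  minikube logs -p download-only-demo; echo "logs exit status: $?"   # exit status 85 in this run
  minikube delete -p download-only-demo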

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-866418
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I1129 08:28:47.670879    4137 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-448605 --alsologtostderr --binary-mirror http://127.0.0.1:38467 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-448605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-448605
--- PASS: TestBinaryMirror (0.61s)
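Note: TestBinaryMirror points minikube at a local HTTP endpoint instead of dl.k8s.io for the kubectl download (the log above shows the dl.k8s.io path layout release/v1.34.1/bin/linux/arm64/kubectl). A sketch of the same idea, assuming the mirror mimics that layout; the served directory and profile name are illustrative, only the flags are taken from the logged command:

  # Serve a local directory as the binary mirror, then do a download-only start against it.
  python3 -m http.server 38467 --directory /srv/k8s-mirror &
  minikube start --download-only -p binary-mirror-demo --alsologtostderr \
    --binary-mirror http://127.0.0.1:38467 --driver=docker --container-runtime=containerd
  minikube delete -p binary-mirror-demo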

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-021028
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-021028: exit status 85 (72.040234ms)

                                                
                                                
-- stdout --
	* Profile "addons-021028" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-021028"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-021028
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-021028: exit status 85 (68.535924ms)

                                                
                                                
-- stdout --
	* Profile "addons-021028" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-021028"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
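Note: both PreSetup checks exercise the error path: with no "addons-021028" profile on disk yet, "addons enable" and "addons disable" exit with status 85 and point the user at "minikube start". A reproduction sketch (profile name illustrative):

  # Expect a fast failure with a "Profile ... not found" hint for a profile that does not exist.
  minikube addons enable dashboard -p no-such-profile;  echo "enable exit: $?"
  minikube addons disable dashboard -p no-such-profile; echo "disable exit: $?"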

                                                
                                    
x
+
TestAddons/Setup (173.32s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-021028 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-021028 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m53.315186548s)
--- PASS: TestAddons/Setup (173.32s)
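Note: the setup start enables fifteen addons in one invocation, which is why it dominates the suite's wall-clock time (about 2m53s). For manual experiments a trimmed version of the same command is usually enough; the sketch below keeps the flags from the log but shortens the addon list (profile name illustrative):

  minikube start -p addons-demo --wait=true --memory=4096 --alsologtostderr \
    --driver=docker --container-runtime=containerd \
    --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns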

                                                
                                    
x
+
TestAddons/serial/Volcano (41.71s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 72.631073ms
addons_test.go:868: volcano-scheduler stabilized in 72.968347ms
addons_test.go:876: volcano-admission stabilized in 73.159347ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-q4cj5" [1b2fa18d-5d93-4c5b-8611-87ba72f9669e] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003140148s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-fw692" [7b6214bf-d00c-44b0-a983-0e6505c27e40] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004336363s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-pxv4t" [ccbeacd4-546f-4c01-bc1a-6c3255505f20] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003563442s
addons_test.go:903: (dbg) Run:  kubectl --context addons-021028 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-021028 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-021028 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [e261652c-322d-46fe-b3cf-17c17b91095e] Pending
helpers_test.go:352: "test-job-nginx-0" [e261652c-322d-46fe-b3cf-17c17b91095e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [e261652c-322d-46fe-b3cf-17c17b91095e] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003825361s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-021028 addons disable volcano --alsologtostderr -v=1: (12.001590183s)
--- PASS: TestAddons/serial/Volcano (41.71s)
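Note: stripped of the pod-wait loops, the Volcano scenario waits for the scheduler, admission, and controller pods, deletes the admission init job, submits a Volcano job from minikube's testdata, and waits for its pod before disabling the addon. The logged commands, collected for manual reproduction (testdata/vcjob.yaml ships with the minikube integration tests):

  kubectl --context addons-021028 delete -n volcano-system job volcano-admission-init
  kubectl --context addons-021028 create -f testdata/vcjob.yaml
  kubectl --context addons-021028 get vcjob -n my-volcano
  minikube -p addons-021028 addons disable volcano --alsologtostderr -v=1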

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-021028 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-021028 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.91s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-021028 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-021028 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8f93960e-595e-44c7-bc76-13b4ac30a319] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8f93960e-595e-44c7-bc76-13b4ac30a319] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003840423s
addons_test.go:694: (dbg) Run:  kubectl --context addons-021028 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-021028 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-021028 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-021028 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.91s)
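Note: this check verifies that the gcp-auth addon mutates newly created pods: the busybox pod from testdata gets GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT injected plus a mounted /google-app-creds.json. The same inspection can be run by hand with the logged commands:

  kubectl --context addons-021028 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-021028 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
  kubectl --context addons-021028 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"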

                                                
                                    
x
+
TestAddons/parallel/Registry (17.21s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.799117ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-p4kxx" [b3ad1918-9273-4563-bb76-ac92574c0282] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003995168s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-l6rrw" [0a4ab5b7-e70d-41f1-9fce-3f7a616eb8ea] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003430698s
addons_test.go:392: (dbg) Run:  kubectl --context addons-021028 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-021028 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-021028 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.123347308s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 ip
2025/11/29 08:32:58 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.21s)
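Note: the registry check probes the addon from both sides: a throwaway busybox pod resolves the in-cluster service name, and the host reaches the registry-proxy on the node IP (the DEBUG GET line above). For manual use; the curl probe is an illustrative addition, not taken from the test:

  # In-cluster: the service DNS name must answer.
  kubectl --context addons-021028 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # From the host: port 5000 on the node IP.
  curl -s -o /dev/null -w "%{http_code}\n" "http://$(minikube -p addons-021028 ip):5000/"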

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.77s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.798208ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-021028
addons_test.go:332: (dbg) Run:  kubectl --context addons-021028 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.77s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (19.1s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-021028 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-021028 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-021028 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [c93ab904-5d86-4854-9061-07582bc1900f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [c93ab904-5d86-4854-9061-07582bc1900f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003843082s
I1129 08:34:17.848065    4137 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-021028 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-021028 addons disable ingress-dns --alsologtostderr -v=1: (1.276791231s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-021028 addons disable ingress --alsologtostderr -v=1: (8.04980387s)
--- PASS: TestAddons/parallel/Ingress (19.10s)
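Note: the ingress check routes a request to the nginx test pod through the ingress controller (the Host header selects the rule) and then resolves an ingress-dns name directly against the node IP. The logged probes, for manual reuse:

  minikube -p addons-021028 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  nslookup hello-john.test "$(minikube -p addons-021028 ip)"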

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.89s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-kvd7r" [eff6ee31-cd0d-4e06-9589-69bedc2ac7a3] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003123326s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-021028 addons disable inspektor-gadget --alsologtostderr -v=1: (5.882300071s)
--- PASS: TestAddons/parallel/InspektorGadget (11.89s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.85s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.761171ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-xzwqc" [38f96d17-a354-4caf-adff-a69327739a06] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003574129s
addons_test.go:463: (dbg) Run:  kubectl --context addons-021028 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)

                                                
                                    
x
+
TestAddons/parallel/CSI (63.38s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1129 08:33:25.470778    4137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1129 08:33:25.474528    4137 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1129 08:33:25.474560    4137 kapi.go:107] duration metric: took 7.881714ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.892495ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-021028 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-021028 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a7b89cf8-35b8-455b-835b-b5c4e01efb45] Pending
helpers_test.go:352: "task-pv-pod" [a7b89cf8-35b8-455b-835b-b5c4e01efb45] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a7b89cf8-35b8-455b-835b-b5c4e01efb45] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003262966s
addons_test.go:572: (dbg) Run:  kubectl --context addons-021028 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-021028 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-021028 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-021028 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-021028 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-021028 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-021028 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [7e0a6745-cab5-49c0-bf3f-ddca6f50f5dd] Pending
helpers_test.go:352: "task-pv-pod-restore" [7e0a6745-cab5-49c0-bf3f-ddca6f50f5dd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [7e0a6745-cab5-49c0-bf3f-ddca6f50f5dd] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003450542s
addons_test.go:614: (dbg) Run:  kubectl --context addons-021028 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-021028 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-021028 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-021028 addons disable volumesnapshots --alsologtostderr -v=1: (1.197894917s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-021028 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.892379302s)
--- PASS: TestAddons/parallel/CSI (63.38s)
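Note: stripped of the PVC polling loops, the CSI scenario is: claim a volume, use it from a pod, snapshot it, delete the original pod and claim, restore from the snapshot, and use the restored claim. The manifests live in minikube's testdata; the ordered kubectl calls from the log are:

  kubectl --context addons-021028 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-021028 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-021028 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-021028 delete pod task-pv-pod
  kubectl --context addons-021028 delete pvc hpvc
  kubectl --context addons-021028 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-021028 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml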

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-021028 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-021028 --alsologtostderr -v=1: (1.36447212s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-bzfp7" [9d546411-9505-4379-bcbe-703fae33c970] Pending
helpers_test.go:352: "headlamp-dfcdc64b-bzfp7" [9d546411-9505-4379-bcbe-703fae33c970] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-bzfp7" [9d546411-9505-4379-bcbe-703fae33c970] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003843066s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-021028 addons disable headlamp --alsologtostderr -v=1: (5.858322011s)
--- PASS: TestAddons/parallel/Headlamp (18.23s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.66s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-sx9hq" [990a7b68-8901-4095-9ed2-44b343b16f9c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003284787s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.66s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (52.57s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-021028 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-021028 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-021028 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [57e4ea22-9528-4d68-b9d7-20f865847d75] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [57e4ea22-9528-4d68-b9d7-20f865847d75] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [57e4ea22-9528-4d68-b9d7-20f865847d75] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00403951s
addons_test.go:967: (dbg) Run:  kubectl --context addons-021028 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 ssh "cat /opt/local-path-provisioner/pvc-2d4273f5-bbba-4354-9edf-3ca04db5494a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-021028 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-021028 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-021028 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.13242644s)
--- PASS: TestAddons/parallel/LocalPath (52.57s)
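Note: the local-path check writes through a PVC backed by the storage-provisioner-rancher addon and then reads the file back from the node's filesystem under /opt/local-path-provisioner (the logged cat of file1). A hand-run sketch; the ls probe is an illustrative addition:

  kubectl --context addons-021028 get pvc test-pvc -o=json
  minikube -p addons-021028 ssh "ls /opt/local-path-provisioner/"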

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (7.11s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-ddl5t" [c1641999-e515-4b9d-8241-608f328d067a] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.010405989s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-021028 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.09505753s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.11s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-jqkms" [2b02297e-4b75-4b7e-ad37-5cf208f706fe] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003953051s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-021028 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-021028 addons disable yakd --alsologtostderr -v=1: (5.824852956s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.36s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-021028
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-021028: (12.067199318s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-021028
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-021028
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-021028
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

                                                
                                    
x
+
TestCertOptions (35.45s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-515442 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-515442 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.571749351s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-515442 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-515442 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-515442 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-515442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-515442
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-515442: (2.139873978s)
--- PASS: TestCertOptions (35.45s)
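Note: TestCertOptions asserts that the extra --apiserver-ips/--apiserver-names and the non-default --apiserver-port end up in the generated API server certificate and kubeconfig. The SANs can be checked by hand with the logged openssl call; the grep is an illustrative addition:

  minikube -p cert-options-515442 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"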

                                                
                                    
x
+
TestCertExpiration (225.12s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-592440 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1129 09:16:41.708985    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:16:50.642509    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-592440 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (34.738606639s)
E1129 09:18:47.573526    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-592440 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-592440 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.827609892s)
helpers_test.go:175: Cleaning up "cert-expiration-592440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-592440
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-592440: (2.553333212s)
--- PASS: TestCertExpiration (225.12s)

                                                
                                    
x
+
TestForceSystemdFlag (36.39s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-908730 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-908730 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.719116423s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-908730 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-908730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-908730
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-908730: (2.260037514s)
--- PASS: TestForceSystemdFlag (36.39s)

                                                
                                    
TestForceSystemdEnv (37.91s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-559836 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-559836 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.188074421s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-559836 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-559836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-559836
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-559836: (2.273596667s)
--- PASS: TestForceSystemdEnv (37.91s)
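
The two force-systemd tests differ only in how the setting reaches minikube: TestForceSystemdFlag passes --force-systemd on the command line, while TestForceSystemdEnv relies on the environment (presumably MINIKUBE_FORCE_SYSTEMD, which the DryRun output further down lists among the variables minikube reads). Both then dump /etc/containerd/config.toml; the assertion itself is not in the log, but the natural thing to check is that containerd was switched to the systemd cgroup driver:

	minikube start -p force-systemd-demo --memory=3072 --force-systemd \
	    --driver=docker --container-runtime=containerd
	# expect SystemdCgroup = true in the runc options
	minikube -p force-systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup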

                                                
                                    
TestDockerEnvContainerd (48.83s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-015435 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-015435 --driver=docker  --container-runtime=containerd: (32.996619308s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-015435"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-015435": (1.138599506s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-rcxy5B6pRSJk/agent.23814" SSH_AGENT_PID="23815" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-rcxy5B6pRSJk/agent.23814" SSH_AGENT_PID="23815" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-rcxy5B6pRSJk/agent.23814" SSH_AGENT_PID="23815" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.221719139s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-rcxy5B6pRSJk/agent.23814" SSH_AGENT_PID="23815" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-015435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-015435
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-015435: (2.038040017s)
--- PASS: TestDockerEnvContainerd (48.83s)
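
The docker-env sequence above reads more clearly as the shell session it represents: point the local docker CLI at the node over SSH, build an image against that daemon with BuildKit disabled, and confirm it is visible there. The eval wrapper is an assumption about manual usage (the test injects SSH_AUTH_SOCK, SSH_AGENT_PID and DOCKER_HOST itself, as shown in the log):

	minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
	eval "$(minikube -p dockerenv-demo docker-env --ssh-host --ssh-add)"
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls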

                                                
                                    
TestErrorSpam/setup (33.5s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-803916 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-803916 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-803916 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-803916 --driver=docker  --container-runtime=containerd: (33.500454792s)
--- PASS: TestErrorSpam/setup (33.50s)

                                                
                                    
TestErrorSpam/start (0.83s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

                                                
                                    
TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 status
--- PASS: TestErrorSpam/status (1.17s)

                                                
                                    
TestErrorSpam/pause (1.73s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 pause
--- PASS: TestErrorSpam/pause (1.73s)

                                                
                                    
TestErrorSpam/unpause (1.88s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 unpause
--- PASS: TestErrorSpam/unpause (1.88s)

                                                
                                    
TestErrorSpam/stop (1.61s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 stop: (1.400529004s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-803916 --log_dir /tmp/nospam-803916 stop
--- PASS: TestErrorSpam/stop (1.61s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22000-2317/.minikube/files/etc/test/nested/copy/4137/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.23s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-378174 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1129 08:36:41.714272    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:41.721183    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:41.732548    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:41.754079    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:41.795442    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:41.876915    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:42.038424    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:42.360076    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:43.001833    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:44.283150    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:46.845902    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:51.967471    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:37:02.209965    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:37:22.691466    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-378174 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m19.225464021s)
--- PASS: TestFunctional/serial/StartWithProxy (79.23s)
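
Note that the proxy configuration this test exercises is injected by the harness and never appears in the log; only the start flags do. A hand-run equivalent would presumably look something like the following, with the proxy address a placeholder and the minikube subnet added to NO_PROXY so the client can still reach the apiserver:

	HTTP_PROXY=http://proxy.example.com:3128 NO_PROXY=192.168.49.0/24 \
	    minikube start -p functional-demo --memory=4096 --apiserver-port=8441 --wait=all \
	    --driver=docker --container-runtime=containerd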

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.92s)

=== RUN   TestFunctional/serial/SoftStart
I1129 08:37:41.416422    4137 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-378174 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-378174 --alsologtostderr -v=8: (7.916450728s)
functional_test.go:678: soft start took 7.918104269s for "functional-378174" cluster.
I1129 08:37:49.333230    4137 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.92s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-378174 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-378174 cache add registry.k8s.io/pause:3.1: (1.29156432s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-378174 cache add registry.k8s.io/pause:3.3: (1.147367852s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-378174 cache add registry.k8s.io/pause:latest: (1.033055844s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-378174 /tmp/TestFunctionalserialCacheCmdcacheadd_local700563915/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 cache add minikube-local-cache-test:functional-378174
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 cache delete minikube-local-cache-test:functional-378174
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-378174
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-378174 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (308.915973ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
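
Taken together, the CacheCmd subtests are one round trip: add an image to minikube's on-host cache, delete it from the node's container runtime, then restore it with cache reload. Condensed from the commands in the log:

	minikube -p functional-378174 cache add registry.k8s.io/pause:latest
	minikube -p functional-378174 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-378174 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
	minikube -p functional-378174 cache reload
	minikube -p functional-378174 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again
	minikube cache list
	minikube cache delete registry.k8s.io/pause:latest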

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 kubectl -- --context functional-378174 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-378174 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.86s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-378174 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1129 08:38:03.653475    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-378174 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.855137587s)
functional_test.go:776: restart took 40.855237807s for "functional-378174" cluster.
I1129 08:38:37.773390    4137 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (40.86s)
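
The restart above is the --extra-config mechanism at work: component.key=value pairs are forwarded to the named control-plane component, here enabling an extra admission plugin on the apiserver. The option is persisted in the profile (it reappears as ExtraOptions in the DryRun config dump below):

	minikube start -p functional-378174 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all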

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-378174 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.51s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-378174 logs: (1.50588766s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 logs --file /tmp/TestFunctionalserialLogsFileCmd538381840/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-378174 logs --file /tmp/TestFunctionalserialLogsFileCmd538381840/001/logs.txt: (1.59891777s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.60s)

                                                
                                    
TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-378174 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-378174
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-378174: exit status 115 (424.323044ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30505 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-378174 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.67s)
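
This subtest exercises the failure path of minikube service: a Service whose selector matches no running pod is applied, and the command is expected to exit with status 115 (SVC_UNREACHABLE) even though it can still print the NodePort URL it would have opened. Condensed from the log:

	kubectl --context functional-378174 apply -f testdata/invalidsvc.yaml
	minikube -p functional-378174 service invalid-svc    # exit 115: no running pod for service
	kubectl --context functional-378174 delete -f testdata/invalidsvc.yaml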

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-378174 config get cpus: exit status 14 (73.451865ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-378174 config get cpus: exit status 14 (69.17302ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
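
The config subcommands behave like a small key/value store: get on an unset key exits 14 with "specified key could not be found in config", which is exactly what the two Non-zero exits above assert; after a set, the same get succeeds. In short:

	minikube -p functional-378174 config set cpus 2
	minikube -p functional-378174 config get cpus     # succeeds, prints the stored value
	minikube -p functional-378174 config unset cpus
	minikube -p functional-378174 config get cpus     # exit status 14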

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.97s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-378174 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-378174 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 38861: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.97s)

                                                
                                    
TestFunctional/parallel/DryRun (0.52s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-378174 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-378174 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (249.535575ms)

                                                
                                                
-- stdout --
	* [functional-378174] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:39:19.850154   38469 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:39:19.850266   38469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:39:19.850282   38469 out.go:374] Setting ErrFile to fd 2...
	I1129 08:39:19.850287   38469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:39:19.850524   38469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 08:39:19.850847   38469 out.go:368] Setting JSON to false
	I1129 08:39:19.851779   38469 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1311,"bootTime":1764404249,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 08:39:19.851845   38469 start.go:143] virtualization:  
	I1129 08:39:19.855683   38469 out.go:179] * [functional-378174] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 08:39:19.860763   38469 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:39:19.863694   38469 notify.go:221] Checking for updates...
	I1129 08:39:19.867006   38469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:39:19.870255   38469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 08:39:19.873837   38469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 08:39:19.877077   38469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 08:39:19.880837   38469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:39:19.884884   38469 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:39:19.885496   38469 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:39:19.934444   38469 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 08:39:19.934598   38469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:39:20.033393   38469 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 08:39:20.007252808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 08:39:20.033507   38469 docker.go:319] overlay module found
	I1129 08:39:20.037068   38469 out.go:179] * Using the docker driver based on existing profile
	I1129 08:39:20.040131   38469 start.go:309] selected driver: docker
	I1129 08:39:20.040160   38469 start.go:927] validating driver "docker" against &{Name:functional-378174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-378174 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:39:20.040280   38469 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:39:20.044064   38469 out.go:203] 
	W1129 08:39:20.047123   38469 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1129 08:39:20.050225   38469 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-378174 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.52s)
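
With --dry-run, minikube validates the requested settings against the existing profile without touching the cluster, so an impossible memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) while a reasonable dry run exits cleanly, as the two runs above show:

	minikube start -p functional-378174 --dry-run --memory 250MB --alsologtostderr \
	    --driver=docker --container-runtime=containerd     # exit 23: 250MiB < 1800MB minimum
	minikube start -p functional-378174 --dry-run --alsologtostderr -v=1 \
	    --driver=docker --container-runtime=containerd     # passes validation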

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-378174 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-378174 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (222.238977ms)

                                                
                                                
-- stdout --
	* [functional-378174] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:39:19.643709   38422 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:39:19.643837   38422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:39:19.643848   38422 out.go:374] Setting ErrFile to fd 2...
	I1129 08:39:19.643854   38422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:39:19.644861   38422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 08:39:19.645286   38422 out.go:368] Setting JSON to false
	I1129 08:39:19.646201   38422 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1311,"bootTime":1764404249,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 08:39:19.646275   38422 start.go:143] virtualization:  
	I1129 08:39:19.649923   38422 out.go:179] * [functional-378174] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1129 08:39:19.652929   38422 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:39:19.652994   38422 notify.go:221] Checking for updates...
	I1129 08:39:19.659646   38422 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:39:19.662621   38422 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 08:39:19.665656   38422 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 08:39:19.668538   38422 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 08:39:19.671608   38422 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:39:19.674816   38422 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:39:19.675369   38422 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:39:19.701864   38422 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 08:39:19.701974   38422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:39:19.785767   38422 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 08:39:19.775572884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 08:39:19.785881   38422 docker.go:319] overlay module found
	I1129 08:39:19.789261   38422 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1129 08:39:19.792219   38422 start.go:309] selected driver: docker
	I1129 08:39:19.792243   38422 start.go:927] validating driver "docker" against &{Name:functional-378174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-378174 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:39:19.792365   38422 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:39:19.795950   38422 out.go:203] 
	W1129 08:39:19.798933   38422 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1129 08:39:19.801881   38422 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
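
The French messages above come from minikube's bundled translations. How the test selects the locale is not visible in the log; minikube picks the language up from the standard locale environment, so a manual reproduction would presumably be along these lines:

	LC_ALL=fr_FR.UTF-8 minikube start -p functional-378174 --dry-run --memory 250MB \
	    --driver=docker --container-runtime=containerd
	# expected: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..." instead of the English message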

                                                
                                    
TestFunctional/parallel/StatusCmd (1.38s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.38s)
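
Besides the default table, minikube status accepts a Go template via -f and structured output via -o json; the field names in the template below come straight from the command shown in the log. For example:

	minikube -p functional-378174 status
	minikube -p functional-378174 status -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-378174 status -o json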

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-378174 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-378174 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-pjpj5" [e3f39f82-66e4-4126-95b5-20c526f8a499] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-pjpj5" [e3f39f82-66e4-4126-95b5-20c526f8a499] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003829096s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30146
functional_test.go:1680: http://192.168.49.2:30146: success! body:
Request served by hello-node-connect-7d85dfc575-pjpj5

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30146
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.66s)
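
The connect test is the usual NodePort round trip: create a deployment from the kicbase/echo-server image, expose it on port 8080 as a NodePort service, ask minikube for the URL, and fetch it; the echo server answers with the request it received, which is the body captured above. The curl at the end is an assumption (the test fetches the URL from Go), but it is the obvious manual equivalent:

	kubectl --context functional-378174 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-378174 expose deployment hello-node-connect --type=NodePort --port=8080
	minikube -p functional-378174 service hello-node-connect --url    # e.g. http://192.168.49.2:30146
	curl "$(minikube -p functional-378174 service hello-node-connect --url)"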

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (29.16s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c9740d80-07fe-4e62-afa5-ba80faa6acd9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004145052s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-378174 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-378174 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-378174 get pvc myclaim -o=json
I1129 08:38:54.285654    4137 retry.go:31] will retry after 2.911829551s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:dc09141a-9f3e-484b-997e-4bfd639bc0cd ResourceVersion:648 Generation:0 CreationTimestamp:2025-11-29 08:38:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x40016bcc00 VolumeMode:0x40016bcc10 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-378174 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-378174 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [50fef749-8eb1-4692-9a2c-5293bb197933] Pending
helpers_test.go:352: "sp-pod" [50fef749-8eb1-4692-9a2c-5293bb197933] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [50fef749-8eb1-4692-9a2c-5293bb197933] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003586391s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-378174 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-378174 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-378174 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [354f94f5-46ae-4c0c-b7c6-d1f2185b242d] Pending
helpers_test.go:352: "sp-pod" [354f94f5-46ae-4c0c-b7c6-d1f2185b242d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004102326s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-378174 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.16s)
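
A rough manual replay of the PVC flow logged above, using the same context and manifests that appear in the log; the jsonpath query is an assumption for brevity (the test itself polls the full PVC object until its phase is "Bound"):
    kubectl --context functional-378174 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-378174 get pvc myclaim -o jsonpath='{.status.phase}'   # expect "Bound"
    # mount the claim in a pod, write a file, recreate the pod, and confirm the file survived
    kubectl --context functional-378174 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-378174 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-378174 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-378174 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-378174 exec sp-pod -- ls /tmp/mount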

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (2.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh -n functional-378174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 cp functional-378174:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2698946639/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh -n functional-378174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh -n functional-378174 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.29s)
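
A rough manual replay of the copy round-trip above; /tmp/cp-roundtrip.txt is a hypothetical destination and the diff step is an assumption (the test compares contents via "sudo cat"), the rest mirrors the logged commands:
    out/minikube-linux-arm64 -p functional-378174 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-378174 ssh -n functional-378174 "sudo cat /home/docker/cp-test.txt"
    # copy the file back off the node and compare it with the original
    out/minikube-linux-arm64 -p functional-378174 cp functional-378174:/home/docker/cp-test.txt /tmp/cp-roundtrip.txt
    diff testdata/cp-test.txt /tmp/cp-roundtrip.txt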

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4137/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "sudo cat /etc/test/nested/copy/4137/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4137.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "sudo cat /etc/ssl/certs/4137.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4137.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "sudo cat /usr/share/ca-certificates/4137.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "sudo cat /etc/ssl/certs/41372.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "sudo cat /usr/share/ca-certificates/41372.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.10s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-378174 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-378174 ssh "sudo systemctl is-active docker": exit status 1 (387.213616ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-378174 ssh "sudo systemctl is-active crio": exit status 1 (362.498266ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
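
For context on the non-zero exits above: with containerd as the active runtime, "systemctl is-active" is expected to print "inactive" and exit with status 3 for the other runtimes, so those failures are the passing outcome. A minimal manual check, reusing the logged commands:
    out/minikube-linux-arm64 -p functional-378174 ssh "sudo systemctl is-active docker"   # expect "inactive", exit status 3
    out/minikube-linux-arm64 -p functional-378174 ssh "sudo systemctl is-active crio"     # expect "inactive", exit status 3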

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-378174 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-378174 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-378174 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 36143: os: process already finished
helpers_test.go:519: unable to terminate pid 35929: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-378174 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-378174 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-378174 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [fb2e1c1c-31f5-44e9-89b3-3dbc8679e4bd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [fb2e1c1c-31f5-44e9-89b3-3dbc8679e4bd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003196823s
I1129 08:38:57.057171    4137 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-378174 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.174.62 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
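
A sketch of the tunnel sequence these serial steps exercise, using the service and manifest named in the log; the shell variable and the curl probe are assumptions (the test fetches the ingress IP over plain HTTP itself), and the tunnel process has to stay running in the background:
    out/minikube-linux-arm64 -p functional-378174 tunnel --alsologtostderr &
    kubectl --context functional-378174 apply -f testdata/testsvc.yaml
    # once the LoadBalancer service gets an ingress IP (10.96.174.62 in this run), HTTP against it should answer
    IP=$(kubectl --context functional-378174 get svc nginx-svc -o "jsonpath={.status.loadBalancer.ingress[0].ip}")
    curl "http://$IP"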

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-378174 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-378174 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-378174 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-nrd82" [dd36eea9-c20b-4814-b123-65be41f355e7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-nrd82" [dd36eea9-c20b-4814-b123-65be41f355e7] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003487806s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 service list -o json
functional_test.go:1504: Took "538.404049ms" to run "out/minikube-linux-arm64 -p functional-378174 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30296
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30296
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
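
The ServiceCmd steps above amount to the following sequence, taken directly from the logged commands (the NodePort 30296 and the endpoint URLs are specific to this run):
    kubectl --context functional-378174 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-378174 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-arm64 -p functional-378174 service list -o json
    out/minikube-linux-arm64 -p functional-378174 service hello-node --url                       # http://192.168.49.2:30296 in this run
    out/minikube-linux-arm64 -p functional-378174 service --namespace=default --https --url hello-node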

TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

TestFunctional/parallel/ProfileCmd/profile_list (0.66s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "601.338075ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "60.635577ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.66s)

TestFunctional/parallel/MountCmd/any-port (8.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-378174 /tmp/TestFunctionalparallelMountCmdany-port3948646500/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764405557096838373" to /tmp/TestFunctionalparallelMountCmdany-port3948646500/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764405557096838373" to /tmp/TestFunctionalparallelMountCmdany-port3948646500/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764405557096838373" to /tmp/TestFunctionalparallelMountCmdany-port3948646500/001/test-1764405557096838373
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-378174 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (466.597934ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1129 08:39:17.565104    4137 retry.go:31] will retry after 585.470656ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 29 08:39 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 29 08:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 29 08:39 test-1764405557096838373
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh cat /mount-9p/test-1764405557096838373
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-378174 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [77e4e93f-4b9d-41ff-89b1-95aaf59f9729] Pending
helpers_test.go:352: "busybox-mount" [77e4e93f-4b9d-41ff-89b1-95aaf59f9729] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [77e4e93f-4b9d-41ff-89b1-95aaf59f9729] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [77e4e93f-4b9d-41ff-89b1-95aaf59f9729] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003358694s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-378174 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "sudo umount -f /mount-9p"
E1129 08:39:25.574846    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-378174 /tmp/TestFunctionalparallelMountCmdany-port3948646500/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.87s)
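
A minimal manual reproduction of the 9p mount check above; /tmp/mount-src is a hypothetical host directory, the remaining commands mirror the ones in the log (the mount process has to stay running in the background):
    out/minikube-linux-arm64 mount -p functional-378174 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-378174 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-378174 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 -p functional-378174 ssh "sudo umount -f /mount-9p"   # cleanup, as the test does on teardown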

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "373.203026ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "81.059571ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/MountCmd/specific-port (1.34s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-378174 /tmp/TestFunctionalparallelMountCmdspecific-port882338712/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-378174 /tmp/TestFunctionalparallelMountCmdspecific-port882338712/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-378174 ssh "sudo umount -f /mount-9p": exit status 1 (276.167759ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-378174 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-378174 /tmp/TestFunctionalparallelMountCmdspecific-port882338712/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.34s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-378174 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2648627113/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-378174 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2648627113/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-378174 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2648627113/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-378174 ssh "findmnt -T" /mount1: exit status 1 (615.495174ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1129 08:39:27.933740    4137 retry.go:31] will retry after 250.529836ms: exit status 1
2025/11/29 08:39:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-378174 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-378174 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2648627113/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-378174 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2648627113/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-378174 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2648627113/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.39s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-378174 version -o=json --components: (1.390436924s)
--- PASS: TestFunctional/parallel/Version/components (1.39s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-378174 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-378174
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-378174
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-378174 image ls --format short --alsologtostderr:
I1129 08:39:36.385713   41604 out.go:360] Setting OutFile to fd 1 ...
I1129 08:39:36.386286   41604 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:39:36.386324   41604 out.go:374] Setting ErrFile to fd 2...
I1129 08:39:36.386349   41604 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:39:36.386646   41604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
I1129 08:39:36.387316   41604 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:39:36.387483   41604 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:39:36.388049   41604 cli_runner.go:164] Run: docker container inspect functional-378174 --format={{.State.Status}}
I1129 08:39:36.418316   41604 ssh_runner.go:195] Run: systemctl --version
I1129 08:39:36.418371   41604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-378174
I1129 08:39:36.444052   41604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/functional-378174/id_rsa Username:docker}
I1129 08:39:36.557106   41604 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-378174 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-378174  │ sha256:97b284 │ 993B   │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/kicbase/echo-server               │ functional-378174  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-378174 image ls --format table --alsologtostderr:
I1129 08:39:36.678668   41681 out.go:360] Setting OutFile to fd 1 ...
I1129 08:39:36.678774   41681 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:39:36.678783   41681 out.go:374] Setting ErrFile to fd 2...
I1129 08:39:36.678790   41681 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:39:36.679043   41681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
I1129 08:39:36.679595   41681 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:39:36.679749   41681 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:39:36.680300   41681 cli_runner.go:164] Run: docker container inspect functional-378174 --format={{.State.Status}}
I1129 08:39:36.698445   41681 ssh_runner.go:195] Run: systemctl --version
I1129 08:39:36.698501   41681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-378174
I1129 08:39:36.724755   41681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/functional-378174/id_rsa Username:docker}
I1129 08:39:36.831212   41681 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-378174 image ls --format json --alsologtostderr:
[{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDige
sts":[],"repoTags":["docker.io/kicbase/echo-server:functional-378174"],"size":"2173567"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],
"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec969
76a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:97b28436ae8cf6ada24f2133abf8f43b1c70f3c2843caf49806bb317cc57b63c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-378174"],"size":"993"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["
gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-378174 image ls --format json --alsologtostderr:
I1129 08:39:36.678221   41676 out.go:360] Setting OutFile to fd 1 ...
I1129 08:39:36.678416   41676 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:39:36.678443   41676 out.go:374] Setting ErrFile to fd 2...
I1129 08:39:36.678463   41676 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:39:36.678859   41676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
I1129 08:39:36.679593   41676 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:39:36.679794   41676 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:39:36.680300   41676 cli_runner.go:164] Run: docker container inspect functional-378174 --format={{.State.Status}}
I1129 08:39:36.700250   41676 ssh_runner.go:195] Run: systemctl --version
I1129 08:39:36.700301   41676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-378174
I1129 08:39:36.719168   41676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/functional-378174/id_rsa Username:docker}
I1129 08:39:36.823491   41676 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-378174 image ls --format yaml --alsologtostderr:
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:97b28436ae8cf6ada24f2133abf8f43b1c70f3c2843caf49806bb317cc57b63c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-378174
size: "993"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-378174
size: "2173567"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-378174 image ls --format yaml --alsologtostderr:
I1129 08:39:36.381357   41603 out.go:360] Setting OutFile to fd 1 ...
I1129 08:39:36.381704   41603 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:39:36.381713   41603 out.go:374] Setting ErrFile to fd 2...
I1129 08:39:36.381725   41603 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:39:36.382198   41603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
I1129 08:39:36.382843   41603 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:39:36.382951   41603 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:39:36.383457   41603 cli_runner.go:164] Run: docker container inspect functional-378174 --format={{.State.Status}}
I1129 08:39:36.401951   41603 ssh_runner.go:195] Run: systemctl --version
I1129 08:39:36.402027   41603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-378174
I1129 08:39:36.434575   41603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/functional-378174/id_rsa Username:docker}
I1129 08:39:36.555367   41603 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
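
The stderr traces for the four ImageList variants above show each invocation running "sudo crictl images --output json" on the node, so the formats are different renderings of the same runtime query. A quick manual comparison, reusing those commands:
    out/minikube-linux-arm64 -p functional-378174 image ls --format table
    out/minikube-linux-arm64 -p functional-378174 ssh "sudo crictl images --output json"   # raw data the table is rendered from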

TestFunctional/parallel/ImageCommands/ImageBuild (6.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-378174 ssh pgrep buildkitd: exit status 1 (280.877625ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image build -t localhost/my-image:functional-378174 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-378174 image build -t localhost/my-image:functional-378174 testdata/build --alsologtostderr: (5.579533559s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-378174 image build -t localhost/my-image:functional-378174 testdata/build --alsologtostderr:
I1129 08:39:37.219623   41807 out.go:360] Setting OutFile to fd 1 ...
I1129 08:39:37.219865   41807 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:39:37.219898   41807 out.go:374] Setting ErrFile to fd 2...
I1129 08:39:37.219919   41807 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:39:37.220393   41807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
I1129 08:39:37.222088   41807 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:39:37.224680   41807 config.go:182] Loaded profile config "functional-378174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:39:37.225252   41807 cli_runner.go:164] Run: docker container inspect functional-378174 --format={{.State.Status}}
I1129 08:39:37.243074   41807 ssh_runner.go:195] Run: systemctl --version
I1129 08:39:37.243132   41807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-378174
I1129 08:39:37.261060   41807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/functional-378174/id_rsa Username:docker}
I1129 08:39:37.367559   41807 build_images.go:162] Building image from path: /tmp/build.1779813657.tar
I1129 08:39:37.367674   41807 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1129 08:39:37.375696   41807 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1779813657.tar
I1129 08:39:37.379429   41807 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1779813657.tar: stat -c "%s %y" /var/lib/minikube/build/build.1779813657.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1779813657.tar': No such file or directory
I1129 08:39:37.379462   41807 ssh_runner.go:362] scp /tmp/build.1779813657.tar --> /var/lib/minikube/build/build.1779813657.tar (3072 bytes)
I1129 08:39:37.397607   41807 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1779813657
I1129 08:39:37.405862   41807 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1779813657 -xf /var/lib/minikube/build/build.1779813657.tar
I1129 08:39:37.413999   41807 containerd.go:394] Building image: /var/lib/minikube/build/build.1779813657
I1129 08:39:37.414082   41807 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1779813657 --local dockerfile=/var/lib/minikube/build/build.1779813657 --output type=image,name=localhost/my-image:functional-378174
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 3.0s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 1.1s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 1.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:970e4d069be566abe1b43e0826b31340b25f04fe9809f313ed086206e273b7e0
#8 exporting manifest sha256:970e4d069be566abe1b43e0826b31340b25f04fe9809f313ed086206e273b7e0 0.0s done
#8 exporting config sha256:15db2d3477de06df47def82db1e40b0750fd07b7bd69d20de00f362710836ed3 0.0s done
#8 naming to localhost/my-image:functional-378174 done
#8 DONE 0.2s
I1129 08:39:42.718896   41807 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1779813657 --local dockerfile=/var/lib/minikube/build/build.1779813657 --output type=image,name=localhost/my-image:functional-378174: (5.304784016s)
I1129 08:39:42.718974   41807 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1779813657
I1129 08:39:42.728137   41807 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1779813657.tar
I1129 08:39:42.736477   41807 build_images.go:218] Built localhost/my-image:functional-378174 from /tmp/build.1779813657.tar
I1129 08:39:42.736506   41807 build_images.go:134] succeeded building to: functional-378174
I1129 08:39:42.736512   41807 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.09s)
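Note: the flow exercised by this test can be replayed by hand against the same profile. A minimal sketch, using the profile name, tag, and paths that appear in the log above (not a supported interface; the build directory name under /var/lib/minikube/build differs per run):

  # 1. confirm no external buildkitd is running in the node (exit status 1 here just means none was found)
  out/minikube-linux-arm64 -p functional-378174 ssh pgrep buildkitd
  # 2. build from the testdata/build context; with the containerd runtime minikube
  #    copies the context into the node and runs buildctl there (see the log above)
  out/minikube-linux-arm64 -p functional-378174 image build -t localhost/my-image:functional-378174 testdata/build --alsologtostderr
  # 3. the new image should show up in the runtime's image list
  out/minikube-linux-arm64 -p functional-378174 image ls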

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.096541991s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-378174
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image load --daemon kicbase/echo-server:functional-378174 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-378174 image load --daemon kicbase/echo-server:functional-378174 --alsologtostderr: (1.088634322s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image load --daemon kicbase/echo-server:functional-378174 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-378174
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image load --daemon kicbase/echo-server:functional-378174 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image save kicbase/echo-server:functional-378174 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image rm kicbase/echo-server:functional-378174 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-378174
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-378174 image save --daemon kicbase/echo-server:functional-378174 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-378174
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
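Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together amount to a save/load round trip. A sketch of the equivalent manual sequence, with the image name and tarball path taken from the logs above:

  # save the image from the cluster to a tarball on the host
  out/minikube-linux-arm64 -p functional-378174 image save kicbase/echo-server:functional-378174 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
  # remove it from the cluster, then load it back from the tarball
  out/minikube-linux-arm64 -p functional-378174 image rm kicbase/echo-server:functional-378174 --alsologtostderr
  out/minikube-linux-arm64 -p functional-378174 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
  # or push it straight into the host's docker daemon and verify it arrived
  out/minikube-linux-arm64 -p functional-378174 image save --daemon kicbase/echo-server:functional-378174 --alsologtostderr
  docker image inspect kicbase/echo-server:functional-378174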

                                                
                                    
TestFunctional/delete_echo-server_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-378174
--- PASS: TestFunctional/delete_echo-server_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-378174
--- PASS: TestFunctional/delete_my-image_image (0.03s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-378174
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (178.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1129 08:41:41.709036    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:09.417182    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m57.570569213s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (178.53s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 kubectl -- rollout status deployment/busybox: (4.797929661s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-rzw7b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-v5gt2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-v9ms2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-rzw7b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-v5gt2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-v9ms2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-rzw7b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-v5gt2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-v9ms2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.66s)
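Note: the DeployApp step applies the busybox deployment and then checks in-cluster DNS from each replica. A sketch of the same check against one replica, using the profile and pod names from the log above (pod names are generated per run):

  out/minikube-linux-arm64 -p ha-866628 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-arm64 -p ha-866628 kubectl -- rollout status deployment/busybox
  # each replica must resolve both an external name and the in-cluster service name
  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-rzw7b -- nslookup kubernetes.io
  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-rzw7b -- nslookup kubernetes.default.svc.cluster.local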

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-rzw7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-rzw7b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-v5gt2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-v5gt2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-v9ms2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-v9ms2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)
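Note: PingHostFromPods verifies that pods can reach the host. A sketch of the check against one replica, with the gateway address 192.168.49.1 taken from the log above:

  # resolve host.minikube.internal from inside a pod, then ping the host gateway
  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-rzw7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-arm64 -p ha-866628 kubectl -- exec busybox-7b57f96db7-rzw7b -- sh -c "ping -c 1 192.168.49.1"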

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (60.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 node add --alsologtostderr -v 5
E1129 08:43:47.571865    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:43:47.578145    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:43:47.589610    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:43:47.611161    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:43:47.652572    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:43:47.734070    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:43:47.895823    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:43:48.219311    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:43:48.861302    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:43:50.142602    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:43:52.704521    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 node add --alsologtostderr -v 5: (59.559367825s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5: (1.1250133s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.68s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-866628 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.125152204s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.13s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 status --output json --alsologtostderr -v 5: (1.124988191s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp testdata/cp-test.txt ha-866628:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2799304989/001/cp-test_ha-866628.txt
E1129 08:43:57.827465    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628:/home/docker/cp-test.txt ha-866628-m02:/home/docker/cp-test_ha-866628_ha-866628-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m02 "sudo cat /home/docker/cp-test_ha-866628_ha-866628-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628:/home/docker/cp-test.txt ha-866628-m03:/home/docker/cp-test_ha-866628_ha-866628-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m03 "sudo cat /home/docker/cp-test_ha-866628_ha-866628-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628:/home/docker/cp-test.txt ha-866628-m04:/home/docker/cp-test_ha-866628_ha-866628-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m04 "sudo cat /home/docker/cp-test_ha-866628_ha-866628-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp testdata/cp-test.txt ha-866628-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2799304989/001/cp-test_ha-866628-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m02:/home/docker/cp-test.txt ha-866628:/home/docker/cp-test_ha-866628-m02_ha-866628.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628 "sudo cat /home/docker/cp-test_ha-866628-m02_ha-866628.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m02:/home/docker/cp-test.txt ha-866628-m03:/home/docker/cp-test_ha-866628-m02_ha-866628-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m03 "sudo cat /home/docker/cp-test_ha-866628-m02_ha-866628-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m02:/home/docker/cp-test.txt ha-866628-m04:/home/docker/cp-test_ha-866628-m02_ha-866628-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m04 "sudo cat /home/docker/cp-test_ha-866628-m02_ha-866628-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp testdata/cp-test.txt ha-866628-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2799304989/001/cp-test_ha-866628-m03.txt
E1129 08:44:08.069394    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m03:/home/docker/cp-test.txt ha-866628:/home/docker/cp-test_ha-866628-m03_ha-866628.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628 "sudo cat /home/docker/cp-test_ha-866628-m03_ha-866628.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m03:/home/docker/cp-test.txt ha-866628-m02:/home/docker/cp-test_ha-866628-m03_ha-866628-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m02 "sudo cat /home/docker/cp-test_ha-866628-m03_ha-866628-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m03:/home/docker/cp-test.txt ha-866628-m04:/home/docker/cp-test_ha-866628-m03_ha-866628-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m04 "sudo cat /home/docker/cp-test_ha-866628-m03_ha-866628-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp testdata/cp-test.txt ha-866628-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2799304989/001/cp-test_ha-866628-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m04:/home/docker/cp-test.txt ha-866628:/home/docker/cp-test_ha-866628-m04_ha-866628.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628 "sudo cat /home/docker/cp-test_ha-866628-m04_ha-866628.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m04:/home/docker/cp-test.txt ha-866628-m02:/home/docker/cp-test_ha-866628-m04_ha-866628-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m02 "sudo cat /home/docker/cp-test_ha-866628-m04_ha-866628-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m04:/home/docker/cp-test.txt ha-866628-m03:/home/docker/cp-test_ha-866628-m04_ha-866628-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m03 "sudo cat /home/docker/cp-test_ha-866628-m04_ha-866628-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.97s)
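Note: CopyFile repeats one pattern for every (source, destination) pair of nodes: copy a file with "minikube cp", then read it back over SSH on the destination. A sketch of two such pairs, with node and file names taken from the log above:

  # host -> node m02, then verify
  out/minikube-linux-arm64 -p ha-866628 cp testdata/cp-test.txt ha-866628-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m02 "sudo cat /home/docker/cp-test.txt"
  # node m02 -> node m03, then verify
  out/minikube-linux-arm64 -p ha-866628 cp ha-866628-m02:/home/docker/cp-test.txt ha-866628-m03:/home/docker/cp-test_ha-866628-m02_ha-866628-m03.txt
  out/minikube-linux-arm64 -p ha-866628 ssh -n ha-866628-m03 "sudo cat /home/docker/cp-test_ha-866628-m02_ha-866628-m03.txt"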

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (2.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 node stop m02 --alsologtostderr -v 5: (1.387283974s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5: exit status 7 (766.896096ms)

                                                
                                                
-- stdout --
	ha-866628
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866628-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-866628-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866628-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:44:18.255303   57955 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:44:18.255426   57955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:44:18.255436   57955 out.go:374] Setting ErrFile to fd 2...
	I1129 08:44:18.255442   57955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:44:18.255699   57955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 08:44:18.255909   57955 out.go:368] Setting JSON to false
	I1129 08:44:18.255952   57955 mustload.go:66] Loading cluster: ha-866628
	I1129 08:44:18.256027   57955 notify.go:221] Checking for updates...
	I1129 08:44:18.257001   57955 config.go:182] Loaded profile config "ha-866628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:44:18.257029   57955 status.go:174] checking status of ha-866628 ...
	I1129 08:44:18.257549   57955 cli_runner.go:164] Run: docker container inspect ha-866628 --format={{.State.Status}}
	I1129 08:44:18.278480   57955 status.go:371] ha-866628 host status = "Running" (err=<nil>)
	I1129 08:44:18.278521   57955 host.go:66] Checking if "ha-866628" exists ...
	I1129 08:44:18.278892   57955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-866628
	I1129 08:44:18.305985   57955 host.go:66] Checking if "ha-866628" exists ...
	I1129 08:44:18.306283   57955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:44:18.306334   57955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-866628
	I1129 08:44:18.326732   57955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/ha-866628/id_rsa Username:docker}
	I1129 08:44:18.430212   57955 ssh_runner.go:195] Run: systemctl --version
	I1129 08:44:18.438404   57955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:44:18.451671   57955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:44:18.514341   57955 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-29 08:44:18.504486431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 08:44:18.514943   57955 kubeconfig.go:125] found "ha-866628" server: "https://192.168.49.254:8443"
	I1129 08:44:18.514979   57955 api_server.go:166] Checking apiserver status ...
	I1129 08:44:18.515028   57955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:44:18.528976   57955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1448/cgroup
	I1129 08:44:18.537448   57955 api_server.go:182] apiserver freezer: "2:freezer:/docker/8afadd21d1091114e61d3104865526b16308f05d6d31e90894a3647068a65a32/kubepods/burstable/podf25f8b8c27e0e5a8ceb8833e1c7a8350/52ec8ae34d9370f489d0746bbfce128b6059c607c8b52749f7fc34f7e6d1d89f"
	I1129 08:44:18.537534   57955 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8afadd21d1091114e61d3104865526b16308f05d6d31e90894a3647068a65a32/kubepods/burstable/podf25f8b8c27e0e5a8ceb8833e1c7a8350/52ec8ae34d9370f489d0746bbfce128b6059c607c8b52749f7fc34f7e6d1d89f/freezer.state
	I1129 08:44:18.545639   57955 api_server.go:204] freezer state: "THAWED"
	I1129 08:44:18.545670   57955 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1129 08:44:18.554054   57955 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1129 08:44:18.554085   57955 status.go:463] ha-866628 apiserver status = Running (err=<nil>)
	I1129 08:44:18.554096   57955 status.go:176] ha-866628 status: &{Name:ha-866628 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:44:18.554112   57955 status.go:174] checking status of ha-866628-m02 ...
	I1129 08:44:18.554422   57955 cli_runner.go:164] Run: docker container inspect ha-866628-m02 --format={{.State.Status}}
	I1129 08:44:18.571516   57955 status.go:371] ha-866628-m02 host status = "Stopped" (err=<nil>)
	I1129 08:44:18.571541   57955 status.go:384] host is not running, skipping remaining checks
	I1129 08:44:18.571549   57955 status.go:176] ha-866628-m02 status: &{Name:ha-866628-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:44:18.571573   57955 status.go:174] checking status of ha-866628-m03 ...
	I1129 08:44:18.571901   57955 cli_runner.go:164] Run: docker container inspect ha-866628-m03 --format={{.State.Status}}
	I1129 08:44:18.593126   57955 status.go:371] ha-866628-m03 host status = "Running" (err=<nil>)
	I1129 08:44:18.593153   57955 host.go:66] Checking if "ha-866628-m03" exists ...
	I1129 08:44:18.593455   57955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-866628-m03
	I1129 08:44:18.612065   57955 host.go:66] Checking if "ha-866628-m03" exists ...
	I1129 08:44:18.612499   57955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:44:18.612545   57955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-866628-m03
	I1129 08:44:18.630064   57955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/ha-866628-m03/id_rsa Username:docker}
	I1129 08:44:18.738021   57955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:44:18.751442   57955 kubeconfig.go:125] found "ha-866628" server: "https://192.168.49.254:8443"
	I1129 08:44:18.751474   57955 api_server.go:166] Checking apiserver status ...
	I1129 08:44:18.751521   57955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:44:18.765070   57955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1386/cgroup
	I1129 08:44:18.774921   57955 api_server.go:182] apiserver freezer: "2:freezer:/docker/32df83124a962861a74185cdb2762a47efb802852a0fdcad026d3d98706b6a3f/kubepods/burstable/pod5c406c2d839780f0357f2c5bba4fde7d/46141eaed791dfbe747e3206cfde9289c8ff5494def2acdc9a832523dec45087"
	I1129 08:44:18.775013   57955 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/32df83124a962861a74185cdb2762a47efb802852a0fdcad026d3d98706b6a3f/kubepods/burstable/pod5c406c2d839780f0357f2c5bba4fde7d/46141eaed791dfbe747e3206cfde9289c8ff5494def2acdc9a832523dec45087/freezer.state
	I1129 08:44:18.784943   57955 api_server.go:204] freezer state: "THAWED"
	I1129 08:44:18.784971   57955 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1129 08:44:18.794611   57955 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1129 08:44:18.794652   57955 status.go:463] ha-866628-m03 apiserver status = Running (err=<nil>)
	I1129 08:44:18.794661   57955 status.go:176] ha-866628-m03 status: &{Name:ha-866628-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:44:18.794681   57955 status.go:174] checking status of ha-866628-m04 ...
	I1129 08:44:18.795017   57955 cli_runner.go:164] Run: docker container inspect ha-866628-m04 --format={{.State.Status}}
	I1129 08:44:18.812289   57955 status.go:371] ha-866628-m04 host status = "Running" (err=<nil>)
	I1129 08:44:18.812328   57955 host.go:66] Checking if "ha-866628-m04" exists ...
	I1129 08:44:18.812671   57955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-866628-m04
	I1129 08:44:18.830994   57955 host.go:66] Checking if "ha-866628-m04" exists ...
	I1129 08:44:18.831304   57955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:44:18.831360   57955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-866628-m04
	I1129 08:44:18.849411   57955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/ha-866628-m04/id_rsa Username:docker}
	I1129 08:44:18.958421   57955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:44:18.973069   57955 status.go:176] ha-866628-m04 status: &{Name:ha-866628-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (2.15s)
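Note: after stopping one control-plane node, "minikube status" still reports the remaining nodes but exits non-zero (the exit status 7 seen above) because at least one node is down. A sketch of the sequence, with names from the log above:

  out/minikube-linux-arm64 -p ha-866628 node stop m02 --alsologtostderr -v 5
  # exit status 7 is expected while m02 is stopped; the other nodes still report Running
  out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5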

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (13.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 node start m02 --alsologtostderr -v 5
E1129 08:44:28.550952    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 node start m02 --alsologtostderr -v 5: (11.868545273s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5: (1.437027464s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.44s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.509196595s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.51s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (87.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 stop --alsologtostderr -v 5: (26.95310497s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 start --wait true --alsologtostderr -v 5
E1129 08:45:09.512239    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 start --wait true --alsologtostderr -v 5: (59.925824791s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (87.05s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 node delete m03 --alsologtostderr -v 5: (10.285998628s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.36s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 stop --alsologtostderr -v 5
E1129 08:46:31.436233    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:46:41.708900    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 stop --alsologtostderr -v 5: (36.353581209s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5: exit status 7 (117.863046ms)

                                                
                                                
-- stdout --
	ha-866628
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-866628-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-866628-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:46:50.362358   72608 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:46:50.362480   72608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:46:50.362491   72608 out.go:374] Setting ErrFile to fd 2...
	I1129 08:46:50.362497   72608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:46:50.362736   72608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 08:46:50.362922   72608 out.go:368] Setting JSON to false
	I1129 08:46:50.362965   72608 mustload.go:66] Loading cluster: ha-866628
	I1129 08:46:50.363037   72608 notify.go:221] Checking for updates...
	I1129 08:46:50.364324   72608 config.go:182] Loaded profile config "ha-866628": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:46:50.364350   72608 status.go:174] checking status of ha-866628 ...
	I1129 08:46:50.365624   72608 cli_runner.go:164] Run: docker container inspect ha-866628 --format={{.State.Status}}
	I1129 08:46:50.382363   72608 status.go:371] ha-866628 host status = "Stopped" (err=<nil>)
	I1129 08:46:50.382384   72608 status.go:384] host is not running, skipping remaining checks
	I1129 08:46:50.382390   72608 status.go:176] ha-866628 status: &{Name:ha-866628 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:46:50.382418   72608 status.go:174] checking status of ha-866628-m02 ...
	I1129 08:46:50.382711   72608 cli_runner.go:164] Run: docker container inspect ha-866628-m02 --format={{.State.Status}}
	I1129 08:46:50.413966   72608 status.go:371] ha-866628-m02 host status = "Stopped" (err=<nil>)
	I1129 08:46:50.413993   72608 status.go:384] host is not running, skipping remaining checks
	I1129 08:46:50.414005   72608 status.go:176] ha-866628-m02 status: &{Name:ha-866628-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:46:50.414029   72608 status.go:174] checking status of ha-866628-m04 ...
	I1129 08:46:50.414361   72608 cli_runner.go:164] Run: docker container inspect ha-866628-m04 --format={{.State.Status}}
	I1129 08:46:50.432321   72608 status.go:371] ha-866628-m04 host status = "Stopped" (err=<nil>)
	I1129 08:46:50.432344   72608 status.go:384] host is not running, skipping remaining checks
	I1129 08:46:50.432351   72608 status.go:176] ha-866628-m04 status: &{Name:ha-866628-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.47s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (60.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.807117332s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.82s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (86.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 node add --control-plane --alsologtostderr -v 5
E1129 08:48:47.572732    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:49:15.277618    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 node add --control-plane --alsologtostderr -v 5: (1m25.641373285s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-866628 status --alsologtostderr -v 5: (1.132937215s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (86.77s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.078755304s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                    
TestJSONOutput/start/Command (50.08s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-304634 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-304634 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (50.080215022s)
--- PASS: TestJSONOutput/start/Command (50.08s)
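With --output=json, minikube emits one CloudEvents-style JSON object per line (the shape is visible in the TestErrorJSONOutput capture further down). One hedged way to follow the same start interactively, assuming jq is installed (jq is not part of the test itself):

    # stream only the step messages out of the JSON event stream
    out/minikube-linux-arm64 start -p json-output-304634 --output=json --user=testUser --memory=3072 --wait=true \
        --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'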

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-304634 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-304634 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-304634 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-304634 --output=json --user=testUser: (6.113018106s)
--- PASS: TestJSONOutput/stop/Command (6.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-561460 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-561460 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.955014ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cbcab4e6-a3b6-41f2-a7bd-42fe9a861fba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-561460] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e6a57759-01be-44d3-b579-d2b608473718","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22000"}}
	{"specversion":"1.0","id":"a5a9b20f-c0e9-406a-b360-74fd12127ed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4bf5645c-8913-44be-bcfd-9a408ee4ca51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig"}}
	{"specversion":"1.0","id":"00daf76f-8a98-4255-ad8f-a8a4fb7d875f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube"}}
	{"specversion":"1.0","id":"b4565b01-e463-42bc-80f0-8887b55ca492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0068cfc6-430b-4419-a3b1-f7c75bc787aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"74432184-3310-4798-a4b3-97d8432569fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-561460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-561460
--- PASS: TestErrorJSONOutput (0.24s)
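The last event in the stdout capture above is the one the test asserts on: type io.k8s.sigs.minikube.error, exit code 56, name DRV_UNSUPPORTED_OS. A small sketch for pulling that event out of a saved stream, assuming jq and a hypothetical events.json file holding the lines above:

    # prints: DRV_UNSUPPORTED_OS (56): The driver 'fail' is not supported on linux/arm64
    jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name) (\(.data.exitcode)): \(.data.message)"' events.json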

                                                
                                    
TestKicCustomNetwork/create_custom_network (39.47s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-092800 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-092800 --network=: (37.196648426s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-092800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-092800
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-092800: (2.246976736s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.47s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.26s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-982531 --network=bridge
E1129 08:51:41.708835    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-982531 --network=bridge: (34.05388018s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-982531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-982531
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-982531: (2.144205916s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.26s)

                                                
                                    
TestKicExistingNetwork (36.73s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1129 08:51:47.352259    4137 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1129 08:51:47.370468    4137 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1129 08:51:47.370552    4137 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1129 08:51:47.370569    4137 cli_runner.go:164] Run: docker network inspect existing-network
W1129 08:51:47.389004    4137 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1129 08:51:47.389031    4137 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1129 08:51:47.389045    4137 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1129 08:51:47.389142    4137 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1129 08:51:47.406034    4137 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8664e809540f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:5a:a5:48:89:fb} reservation:<nil>}
I1129 08:51:47.406591    4137 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400189d180}
I1129 08:51:47.406663    4137 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1129 08:51:47.406772    4137 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1129 08:51:47.469745    4137 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-770033 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-770033 --network=existing-network: (34.391674549s)
helpers_test.go:175: Cleaning up "existing-network-770033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-770033
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-770033: (2.186604451s)
I1129 08:52:24.065569    4137 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.73s)
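The point of this test is that minikube adopts a Docker network that already exists rather than creating its own: the harness pre-creates existing-network on 192.168.58.0/24 (the 08:51:47 network_create lines above) and then starts a profile against it. Roughly the same flow by hand, reusing the flags from the log:

    # pre-create the bridge network the way the test harness does
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
        existing-network
    # start a profile on it; minikube should reuse this network instead of allocating a new subnet
    out/minikube-linux-arm64 start -p existing-network-770033 --network=existing-network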

                                                
                                    
TestKicCustomSubnet (37.99s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-266520 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-266520 --subnet=192.168.60.0/24: (35.640660597s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-266520 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-266520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-266520
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-266520: (2.317111826s)
--- PASS: TestKicCustomSubnet (37.99s)
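For reference, the subnet assertion boils down to two commands; this is the same pair the test runs, shown here as a standalone sketch:

    out/minikube-linux-arm64 start -p custom-subnet-266520 --subnet=192.168.60.0/24
    # should print 192.168.60.0/24
    docker network inspect custom-subnet-266520 --format "{{(index .IPAM.Config 0).Subnet}}"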

                                                
                                    
TestKicStaticIP (36.28s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-715545 --static-ip=192.168.200.200
E1129 08:53:04.778534    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-715545 --static-ip=192.168.200.200: (33.860494137s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-715545 ip
helpers_test.go:175: Cleaning up "static-ip-715545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-715545
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-715545: (2.257202385s)
--- PASS: TestKicStaticIP (36.28s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (72.66s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-775547 --driver=docker  --container-runtime=containerd
E1129 08:53:47.573054    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-775547 --driver=docker  --container-runtime=containerd: (31.519096267s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-778148 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-778148 --driver=docker  --container-runtime=containerd: (35.288895822s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-775547
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-778148
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-778148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-778148
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-778148: (2.103539412s)
helpers_test.go:175: Cleaning up "first-775547" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-775547
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-775547: (2.147302797s)
--- PASS: TestMinikubeProfile (72.66s)
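This test creates two clusters and then flips the active profile back and forth: "profile <name>" selects the profile that later bare commands operate on, and "profile list -ojson" dumps the known profiles for the assertions. A condensed sketch of the same sequence, assuming both profiles from this run still exist:

    out/minikube-linux-arm64 profile first-775547     # make first-775547 the active profile
    out/minikube-linux-arm64 profile list -ojson      # machine-readable profile inventory the test inspects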

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.52s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-008340 --memory=3072 --mount-string /tmp/TestMountStartserial1899473682/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-008340 --memory=3072 --mount-string /tmp/TestMountStartserial1899473682/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.524309891s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.52s)
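The start flags above wire a host directory into the node at /minikube-host: --mount-string maps a host path to a guest path, and the remaining --mount-* flags tune the mount. A hedged standalone version (the host path here is a placeholder; the test uses a per-test temp directory):

    out/minikube-linux-arm64 start -p mount-start-1-008340 --memory=3072 \
        --mount-string /path/on/host:/minikube-host \
        --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
        --no-kubernetes --driver=docker --container-runtime=containerd
    # the VerifyMount* subtests below check the mount from inside the node like this
    out/minikube-linux-arm64 -p mount-start-1-008340 ssh -- ls /minikube-host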

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-008340 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.74s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-010424 --memory=3072 --mount-string /tmp/TestMountStartserial1899473682/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-010424 --memory=3072 --mount-string /tmp/TestMountStartserial1899473682/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.735628798s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.74s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-010424 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-008340 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-008340 --alsologtostderr -v=5: (1.725851603s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-010424 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-010424
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-010424: (1.319053749s)
--- PASS: TestMountStart/serial/Stop (1.32s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.44s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-010424
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-010424: (6.442581547s)
--- PASS: TestMountStart/serial/RestartStopped (7.44s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-010424 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (108.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-323088 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1129 08:56:41.708032    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-323088 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.63392941s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.16s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-323088 -- rollout status deployment/busybox: (3.070622186s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- exec busybox-7b57f96db7-shwqc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- exec busybox-7b57f96db7-vz8l6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- exec busybox-7b57f96db7-shwqc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- exec busybox-7b57f96db7-vz8l6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- exec busybox-7b57f96db7-shwqc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- exec busybox-7b57f96db7-vz8l6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.95s)
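The DNS checks above run the same lookups in every busybox pod of the deployment. A compact, hedged equivalent that iterates over whatever pods exist in the default namespace (mirroring how the test discovers pod names):

    for pod in $(kubectl --context multinode-323088 get pods -o jsonpath='{.items[*].metadata.name}'); do
      # each pod should resolve the in-cluster service name
      kubectl --context multinode-323088 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done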

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- exec busybox-7b57f96db7-shwqc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- exec busybox-7b57f96db7-shwqc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- exec busybox-7b57f96db7-vz8l6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-323088 -- exec busybox-7b57f96db7-vz8l6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
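The sh pipeline above is doing the host discovery: in busybox's nslookup output the resolved address for host.minikube.internal lands on the fifth line here, so awk 'NR==5' plus cut -d' ' -f3 extracts the host-side gateway IP (192.168.67.1 in this run), which the pod then pings. Captured as a variable, the same check looks roughly like:

    HOST_IP=$(kubectl --context multinode-323088 exec busybox-7b57f96db7-shwqc -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-323088 exec busybox-7b57f96db7-shwqc -- sh -c "ping -c 1 $HOST_IP"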

                                                
                                    
TestMultiNode/serial/AddNode (58.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-323088 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-323088 -v=5 --alsologtostderr: (57.790827386s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.52s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-323088 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.78s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp testdata/cp-test.txt multinode-323088:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp multinode-323088:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1225840080/001/cp-test_multinode-323088.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp multinode-323088:/home/docker/cp-test.txt multinode-323088-m02:/home/docker/cp-test_multinode-323088_multinode-323088-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m02 "sudo cat /home/docker/cp-test_multinode-323088_multinode-323088-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp multinode-323088:/home/docker/cp-test.txt multinode-323088-m03:/home/docker/cp-test_multinode-323088_multinode-323088-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m03 "sudo cat /home/docker/cp-test_multinode-323088_multinode-323088-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp testdata/cp-test.txt multinode-323088-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp multinode-323088-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1225840080/001/cp-test_multinode-323088-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp multinode-323088-m02:/home/docker/cp-test.txt multinode-323088:/home/docker/cp-test_multinode-323088-m02_multinode-323088.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088 "sudo cat /home/docker/cp-test_multinode-323088-m02_multinode-323088.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp multinode-323088-m02:/home/docker/cp-test.txt multinode-323088-m03:/home/docker/cp-test_multinode-323088-m02_multinode-323088-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m03 "sudo cat /home/docker/cp-test_multinode-323088-m02_multinode-323088-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp testdata/cp-test.txt multinode-323088-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp multinode-323088-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1225840080/001/cp-test_multinode-323088-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp multinode-323088-m03:/home/docker/cp-test.txt multinode-323088:/home/docker/cp-test_multinode-323088-m03_multinode-323088.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088 "sudo cat /home/docker/cp-test_multinode-323088-m03_multinode-323088.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 cp multinode-323088-m03:/home/docker/cp-test.txt multinode-323088-m02:/home/docker/cp-test_multinode-323088-m03_multinode-323088-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m02 "sudo cat /home/docker/cp-test_multinode-323088-m03_multinode-323088-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.46s)
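Every leg of this test is the same two-step pattern: push a file with cp, then read it back over ssh on the destination node. One host-to-node leg, lifted straight from the sequence above:

    out/minikube-linux-arm64 -p multinode-323088 cp testdata/cp-test.txt multinode-323088-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-323088 ssh -n multinode-323088-m02 "sudo cat /home/docker/cp-test.txt"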

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-323088 node stop m03: (1.312865491s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-323088 status: exit status 7 (531.7234ms)

                                                
                                                
-- stdout --
	multinode-323088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-323088-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-323088-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-323088 status --alsologtostderr: exit status 7 (539.249409ms)

                                                
                                                
-- stdout --
	multinode-323088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-323088-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-323088-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:58:27.699379  125485 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:58:27.699539  125485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:58:27.699571  125485 out.go:374] Setting ErrFile to fd 2...
	I1129 08:58:27.699593  125485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:58:27.699858  125485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 08:58:27.700074  125485 out.go:368] Setting JSON to false
	I1129 08:58:27.700137  125485 mustload.go:66] Loading cluster: multinode-323088
	I1129 08:58:27.700233  125485 notify.go:221] Checking for updates...
	I1129 08:58:27.700596  125485 config.go:182] Loaded profile config "multinode-323088": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:58:27.700680  125485 status.go:174] checking status of multinode-323088 ...
	I1129 08:58:27.701259  125485 cli_runner.go:164] Run: docker container inspect multinode-323088 --format={{.State.Status}}
	I1129 08:58:27.723507  125485 status.go:371] multinode-323088 host status = "Running" (err=<nil>)
	I1129 08:58:27.723536  125485 host.go:66] Checking if "multinode-323088" exists ...
	I1129 08:58:27.723850  125485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-323088
	I1129 08:58:27.744949  125485 host.go:66] Checking if "multinode-323088" exists ...
	I1129 08:58:27.745248  125485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:58:27.745296  125485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-323088
	I1129 08:58:27.768039  125485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/multinode-323088/id_rsa Username:docker}
	I1129 08:58:27.874201  125485 ssh_runner.go:195] Run: systemctl --version
	I1129 08:58:27.880904  125485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:58:27.894175  125485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:58:27.954691  125485 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-29 08:58:27.945498059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 08:58:27.955227  125485 kubeconfig.go:125] found "multinode-323088" server: "https://192.168.67.2:8443"
	I1129 08:58:27.955261  125485 api_server.go:166] Checking apiserver status ...
	I1129 08:58:27.955309  125485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:58:27.968034  125485 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1356/cgroup
	I1129 08:58:27.977073  125485 api_server.go:182] apiserver freezer: "2:freezer:/docker/cce86df7b412667f4dbef96c4246b529cd0ccc9afb6c61e9486bf32736676c71/kubepods/burstable/pod384a4fa67f564e9c43a24aac4074d4fe/2ca794ce6f4b63d513374419569cb46e49044bec8ae8469f7ac7b3195e3d8ef2"
	I1129 08:58:27.977147  125485 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cce86df7b412667f4dbef96c4246b529cd0ccc9afb6c61e9486bf32736676c71/kubepods/burstable/pod384a4fa67f564e9c43a24aac4074d4fe/2ca794ce6f4b63d513374419569cb46e49044bec8ae8469f7ac7b3195e3d8ef2/freezer.state
	I1129 08:58:27.985448  125485 api_server.go:204] freezer state: "THAWED"
	I1129 08:58:27.985485  125485 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1129 08:58:27.993643  125485 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1129 08:58:27.993671  125485 status.go:463] multinode-323088 apiserver status = Running (err=<nil>)
	I1129 08:58:27.993683  125485 status.go:176] multinode-323088 status: &{Name:multinode-323088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:58:27.993699  125485 status.go:174] checking status of multinode-323088-m02 ...
	I1129 08:58:27.994015  125485 cli_runner.go:164] Run: docker container inspect multinode-323088-m02 --format={{.State.Status}}
	I1129 08:58:28.013636  125485 status.go:371] multinode-323088-m02 host status = "Running" (err=<nil>)
	I1129 08:58:28.013668  125485 host.go:66] Checking if "multinode-323088-m02" exists ...
	I1129 08:58:28.013983  125485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-323088-m02
	I1129 08:58:28.031852  125485 host.go:66] Checking if "multinode-323088-m02" exists ...
	I1129 08:58:28.032206  125485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:58:28.032252  125485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-323088-m02
	I1129 08:58:28.050933  125485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22000-2317/.minikube/machines/multinode-323088-m02/id_rsa Username:docker}
	I1129 08:58:28.154028  125485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:58:28.167480  125485 status.go:176] multinode-323088-m02 status: &{Name:multinode-323088-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:58:28.167526  125485 status.go:174] checking status of multinode-323088-m03 ...
	I1129 08:58:28.167889  125485 cli_runner.go:164] Run: docker container inspect multinode-323088-m03 --format={{.State.Status}}
	I1129 08:58:28.186598  125485 status.go:371] multinode-323088-m03 host status = "Stopped" (err=<nil>)
	I1129 08:58:28.186623  125485 status.go:384] host is not running, skipping remaining checks
	I1129 08:58:28.186630  125485 status.go:176] multinode-323088-m03 status: &{Name:multinode-323088-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
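The non-zero exits above are expected: minikube status encodes component state in its exit code, and this run returns 7 once m03 is stopped while the other nodes keep running. A quick way to observe that by hand:

    out/minikube-linux-arm64 -p multinode-323088 status
    # 0 when everything is running; this test expects a non-zero code (7 in this run) after stopping m03
    echo "status exit code: $?"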

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-323088 node start m03 -v=5 --alsologtostderr: (7.036607933s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.83s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (73.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-323088
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-323088
E1129 08:58:47.575763    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-323088: (25.193182979s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-323088 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-323088 --wait=true -v=5 --alsologtostderr: (47.902206703s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-323088
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.23s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-323088 node delete m03: (5.051763161s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.77s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 stop
E1129 09:00:10.640768    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-323088 stop: (23.946800201s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-323088 status: exit status 7 (100.976779ms)

                                                
                                                
-- stdout --
	multinode-323088
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-323088-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-323088 status --alsologtostderr: exit status 7 (101.569655ms)

                                                
                                                
-- stdout --
	multinode-323088
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-323088-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:00:19.116671  134225 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:00:19.116811  134225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:00:19.116824  134225 out.go:374] Setting ErrFile to fd 2...
	I1129 09:00:19.116832  134225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:00:19.117114  134225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:00:19.117305  134225 out.go:368] Setting JSON to false
	I1129 09:00:19.117340  134225 mustload.go:66] Loading cluster: multinode-323088
	I1129 09:00:19.117503  134225 notify.go:221] Checking for updates...
	I1129 09:00:19.117737  134225 config.go:182] Loaded profile config "multinode-323088": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:00:19.117757  134225 status.go:174] checking status of multinode-323088 ...
	I1129 09:00:19.118265  134225 cli_runner.go:164] Run: docker container inspect multinode-323088 --format={{.State.Status}}
	I1129 09:00:19.140234  134225 status.go:371] multinode-323088 host status = "Stopped" (err=<nil>)
	I1129 09:00:19.140255  134225 status.go:384] host is not running, skipping remaining checks
	I1129 09:00:19.140264  134225 status.go:176] multinode-323088 status: &{Name:multinode-323088 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:00:19.140289  134225 status.go:174] checking status of multinode-323088-m02 ...
	I1129 09:00:19.140595  134225 cli_runner.go:164] Run: docker container inspect multinode-323088-m02 --format={{.State.Status}}
	I1129 09:00:19.170534  134225 status.go:371] multinode-323088-m02 host status = "Stopped" (err=<nil>)
	I1129 09:00:19.170556  134225 status.go:384] host is not running, skipping remaining checks
	I1129 09:00:19.170564  134225 status.go:176] multinode-323088-m02 status: &{Name:multinode-323088-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.15s)
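
Worth noting from the output above: `minikube status` deliberately exits with status 7 once the hosts are stopped, so a caller has to treat that exit code as "stopped" rather than as a command failure. A minimal Go sketch of that handling (binary path and profile name copied from the log; this is not minikube's or the test's own code):

// Sketch only: distinguish the expected exit 7 ("stopped") from a real error.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-323088", "status")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		fmt.Printf("cluster stopped (exit 7):\n%s", out)
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Printf("cluster running:\n%s", out)
}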

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (57.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-323088 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-323088 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (56.535864226s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-323088 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.27s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (35.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-323088
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-323088-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-323088-m02 --driver=docker  --container-runtime=containerd: exit status 14 (101.535807ms)

                                                
                                                
-- stdout --
	* [multinode-323088-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-323088-m02' is duplicated with machine name 'multinode-323088-m02' in profile 'multinode-323088'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-323088-m03 --driver=docker  --container-runtime=containerd
E1129 09:01:41.708801    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-323088-m03 --driver=docker  --container-runtime=containerd: (32.608922594s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-323088
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-323088: exit status 80 (339.528498ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-323088 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-323088-m03 already exists in multinode-323088-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-323088-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-323088-m03: (2.094550176s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.19s)
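
The two failures above are the expected guards: exit 14 (MK_USAGE) because the new profile name collides with an existing machine name, and exit 80 (GUEST_NODE_ADD) because that node name is already taken. A hedged sketch of a pre-flight name check built on the `node list` subcommand shown earlier in this test; the assumption that the node name is the first whitespace-separated field of each output line is mine, not something the log confirms:

// Sketch only: scan `minikube node list -p <profile>` for a colliding name.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func collides(binary, profile, candidate string) (bool, error) {
	out, err := exec.Command(binary, "node", "list", "-p", profile).Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 0 && fields[0] == candidate {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	hit, err := collides("out/minikube-linux-arm64", "multinode-323088", "multinode-323088-m02")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("candidate name already used as a machine name:", hit)
}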

                                                
                                    
x
+
TestPreload (120.72s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-632491 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-632491 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (58.922882371s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-632491 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-632491 image pull gcr.io/k8s-minikube/busybox: (2.283285889s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-632491
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-632491: (5.915218274s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-632491 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1129 09:03:47.572390    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-632491 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (50.891990147s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-632491 image list
helpers_test.go:175: Cleaning up "test-preload-632491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-632491
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-632491: (2.459130952s)
--- PASS: TestPreload (120.72s)

                                                
                                    
x
+
TestScheduledStopUnix (111.72s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-670951 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-670951 --memory=3072 --driver=docker  --container-runtime=containerd: (34.485259727s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-670951 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 09:04:31.140954  150058 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:04:31.141151  150058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:31.141163  150058 out.go:374] Setting ErrFile to fd 2...
	I1129 09:04:31.141169  150058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:31.141489  150058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:04:31.141952  150058 out.go:368] Setting JSON to false
	I1129 09:04:31.142091  150058 mustload.go:66] Loading cluster: scheduled-stop-670951
	I1129 09:04:31.142485  150058 config.go:182] Loaded profile config "scheduled-stop-670951": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:31.142567  150058 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/config.json ...
	I1129 09:04:31.142761  150058 mustload.go:66] Loading cluster: scheduled-stop-670951
	I1129 09:04:31.142886  150058 config.go:182] Loaded profile config "scheduled-stop-670951": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-670951 -n scheduled-stop-670951
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-670951 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 09:04:31.630995  150147 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:04:31.632849  150147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:31.633126  150147 out.go:374] Setting ErrFile to fd 2...
	I1129 09:04:31.633180  150147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:31.633639  150147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:04:31.633934  150147 out.go:368] Setting JSON to false
	I1129 09:04:31.634115  150147 daemonize_unix.go:73] killing process 150074 as it is an old scheduled stop
	I1129 09:04:31.634196  150147 mustload.go:66] Loading cluster: scheduled-stop-670951
	I1129 09:04:31.634768  150147 config.go:182] Loaded profile config "scheduled-stop-670951": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:31.634853  150147 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/config.json ...
	I1129 09:04:31.635036  150147 mustload.go:66] Loading cluster: scheduled-stop-670951
	I1129 09:04:31.635154  150147 config.go:182] Loaded profile config "scheduled-stop-670951": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1129 09:04:31.641628    4137 retry.go:31] will retry after 113.671µs: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.641825    4137 retry.go:31] will retry after 111.671µs: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.642414    4137 retry.go:31] will retry after 318.681µs: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.642923    4137 retry.go:31] will retry after 335.152µs: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.644056    4137 retry.go:31] will retry after 550.482µs: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.646342    4137 retry.go:31] will retry after 1.131297ms: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.648605    4137 retry.go:31] will retry after 1.199097ms: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.650897    4137 retry.go:31] will retry after 1.129839ms: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.653149    4137 retry.go:31] will retry after 3.238561ms: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.657386    4137 retry.go:31] will retry after 3.658843ms: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.661623    4137 retry.go:31] will retry after 7.600289ms: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.669897    4137 retry.go:31] will retry after 10.430958ms: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.681158    4137 retry.go:31] will retry after 17.848041ms: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.699411    4137 retry.go:31] will retry after 28.074142ms: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.727847    4137 retry.go:31] will retry after 34.622101ms: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
I1129 09:04:31.763092    4137 retry.go:31] will retry after 52.733026ms: open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-670951 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-670951 -n scheduled-stop-670951
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-670951
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-670951 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 09:04:57.643131  150825 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:04:57.643368  150825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:57.643398  150825 out.go:374] Setting ErrFile to fd 2...
	I1129 09:04:57.643418  150825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:57.643687  150825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:04:57.643987  150825 out.go:368] Setting JSON to false
	I1129 09:04:57.644184  150825 mustload.go:66] Loading cluster: scheduled-stop-670951
	I1129 09:04:57.644668  150825 config.go:182] Loaded profile config "scheduled-stop-670951": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:57.644782  150825 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/scheduled-stop-670951/config.json ...
	I1129 09:04:57.645033  150825 mustload.go:66] Loading cluster: scheduled-stop-670951
	I1129 09:04:57.645197  150825 config.go:182] Loaded profile config "scheduled-stop-670951": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-670951
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-670951: exit status 7 (72.249773ms)

                                                
                                                
-- stdout --
	scheduled-stop-670951
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-670951 -n scheduled-stop-670951
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-670951 -n scheduled-stop-670951: exit status 7 (68.011118ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-670951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-670951
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-670951: (5.538338494s)
--- PASS: TestScheduledStopUnix (111.72s)
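
The `retry.go` lines above show the pattern used while waiting for the scheduled-stop pid file: re-read the file with roughly doubling delays until it appears. A self-contained Go sketch of that retry loop (the path, attempt count, and starting delay are illustrative values, not minikube's):

// Sketch only: retry a file read with geometrically increasing back-off,
// as in the "will retry after ..." lines logged above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, attempts int) ([]byte, error) {
	delay := 100 * time.Microsecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		lastErr = err
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off roughly geometrically
	}
	return nil, lastErr
}

func main() {
	if _, err := waitForFile("/tmp/scheduled-stop-example.pid", 10); err != nil {
		fmt.Println("gave up:", err)
	}
}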

                                                
                                    
x
+
TestInsufficientStorage (13.15s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-050032 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-050032 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.55009445s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b902d677-cb84-41f6-b239-df06973e741f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-050032] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7350e70a-ffc0-4d2d-872c-fb3c3cb7f6ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22000"}}
	{"specversion":"1.0","id":"a2ef0ead-1287-4051-af25-275377e77df2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"14f25a34-9c24-4ead-99b1-800f164c48f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig"}}
	{"specversion":"1.0","id":"05e7dd05-1207-4489-a0a3-283960041e66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube"}}
	{"specversion":"1.0","id":"5a84801f-6785-42f4-97bf-87c8e197d69a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4f8c543b-31ca-4ce4-ba30-a2aa5628bcfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1b8b6ab0-2af6-43b1-8f7e-4c5cdba72085","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"10e7e1bc-1bf1-4f1c-8892-8a8ffdd34a15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f6b356cd-8898-4c8a-a21f-cba2a9e9d666","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"070ed089-df70-4c28-98fe-d5b09e273482","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"cda9f6dd-94d5-4833-a046-e3dfe63ba1bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-050032\" primary control-plane node in \"insufficient-storage-050032\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6171aacc-b12a-42b3-802f-431ad3f451b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ceaf12dd-84d6-4fca-8d10-2c551873d3fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"63b11e2c-28f1-47f6-b67b-92f1cee43dd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-050032 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-050032 --output=json --layout=cluster: exit status 7 (305.945653ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-050032","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-050032","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1129 09:05:59.170699  152659 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-050032" does not appear in /home/jenkins/minikube-integration/22000-2317/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-050032 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-050032 --output=json --layout=cluster: exit status 7 (311.65061ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-050032","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-050032","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1129 09:05:59.482483  152724 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-050032" does not appear in /home/jenkins/minikube-integration/22000-2317/kubeconfig
	E1129 09:05:59.492537  152724 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/insufficient-storage-050032/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-050032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-050032
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-050032: (1.98009983s)
--- PASS: TestInsufficientStorage (13.15s)
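
With `--output=json --layout=cluster`, the status command prints a single JSON document even while exiting non-zero (exit 7 above, with StatusCode 507 / InsufficientStorage in the payload). A small Go sketch that decodes just the top-level fields visible in the log; it is an illustration, not the test's parser:

// Sketch only: read the cluster-layout JSON status and report the code.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"-p", "insufficient-storage-050032", "--output=json", "--layout=cluster")
	out, _ := cmd.Output() // a non-zero exit (7) is expected while the cluster is unhealthy
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("could not parse status:", err)
		return
	}
	fmt.Printf("%s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
}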

                                                
                                    
x
+
TestRunningBinaryUpgrade (324.39s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3281477878 start -p running-upgrade-115889 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3281477878 start -p running-upgrade-115889 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (41.4398468s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-115889 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-115889 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m39.585691707s)
helpers_test.go:175: Cleaning up "running-upgrade-115889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-115889
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-115889: (2.077637245s)
--- PASS: TestRunningBinaryUpgrade (324.39s)

                                                
                                    
x
+
TestKubernetesUpgrade (361.91s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-211277 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-211277 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.74027517s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-211277
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-211277: (1.342919142s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-211277 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-211277 status --format={{.Host}}: exit status 7 (104.491081ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-211277 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-211277 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m55.025503637s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-211277 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-211277 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-211277 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (114.080958ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-211277] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-211277
	    minikube start -p kubernetes-upgrade-211277 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2112772 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-211277 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-211277 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-211277 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.596666791s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-211277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-211277
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-211277: (3.845481986s)
--- PASS: TestKubernetesUpgrade (361.91s)
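
The exit-106 block above is the downgrade guard: an existing v1.34.1 cluster cannot be moved back to v1.28.0, and minikube prints recovery suggestions instead of attempting it. A toy version-comparison sketch of that rule (a minimal illustration, not minikube's implementation):

// Sketch only: refuse a Kubernetes downgrade request, as in the
// K8S_DOWNGRADE_UNSUPPORTED (exit 106) output above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parse(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	nums := make([]int, len(parts))
	for i, p := range parts {
		nums[i], _ = strconv.Atoi(p)
	}
	return nums
}

func isDowngrade(current, requested string) bool {
	c, r := parse(current), parse(requested)
	for i := 0; i < len(c) && i < len(r); i++ {
		if r[i] < c[i] {
			return true
		}
		if r[i] > c[i] {
			return false
		}
	}
	return false
}

func main() {
	fmt.Println(isDowngrade("v1.34.1", "v1.28.0")) // true: refuse and suggest alternatives
}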

                                                
                                    
x
+
TestMissingContainerUpgrade (134.58s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3030139797 start -p missing-upgrade-765481 --memory=3072 --driver=docker  --container-runtime=containerd
E1129 09:06:41.708751    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3030139797 start -p missing-upgrade-765481 --memory=3072 --driver=docker  --container-runtime=containerd: (1m4.910203504s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-765481
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-765481: (1.008780239s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-765481
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-765481 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-765481 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.85197908s)
helpers_test.go:175: Cleaning up "missing-upgrade-765481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-765481
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-765481: (2.588691479s)
--- PASS: TestMissingContainerUpgrade (134.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-280152 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-280152 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (105.790526ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-280152] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (41.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-280152 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-280152 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.355966918s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-280152 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (25.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-280152 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-280152 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.909967944s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-280152 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-280152 status -o json: exit status 2 (498.842411ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-280152","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-280152
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-280152: (2.416066966s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-280152 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-280152 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.568027902s)
--- PASS: TestNoKubernetes/serial/Start (7.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22000-2317/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-280152 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-280152 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.675613ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
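
The check above relies on `systemctl is-active` exiting non-zero when the unit is inactive (the inner ssh session reports status 3 here), so a failing `minikube ssh` run is the expected, passing outcome for a --no-kubernetes profile. A small Go sketch of the same probe, with the binary path, profile, and remote command copied from the log:

// Sketch only: a non-zero exit means the kubelet unit is not active,
// which is what this test asserts.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-280152",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active (expected for --no-kubernetes):", err)
		return
	}
	fmt.Println("kubelet is active")
}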

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.73s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-280152
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-280152: (1.289238004s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-280152 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-280152 --driver=docker  --container-runtime=containerd: (6.440092469s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.44s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-280152 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-280152 "sudo systemctl is-active --quiet service kubelet": exit status 1 (276.380082ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.12s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (311.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.73188820 start -p stopped-upgrade-851557 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1129 09:08:47.572244    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.73188820 start -p stopped-upgrade-851557 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (38.3963487s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.73188820 -p stopped-upgrade-851557 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.73188820 -p stopped-upgrade-851557 stop: (1.26919442s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-851557 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1129 09:09:44.780357    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:11:41.708197    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-851557 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m31.503575093s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (311.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (2.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-851557
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-851557: (2.543481053s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.54s)

                                                
                                    
x
+
TestPause/serial/Start (62.07s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-256407 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1129 09:13:47.572770    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-256407 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m2.073262956s)
--- PASS: TestPause/serial/Start (62.07s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.71s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-256407 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-256407 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.699646442s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.71s)

                                                
                                    
x
+
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-256407 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-256407 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-256407 --output=json --layout=cluster: exit status 2 (340.962975ms)

                                                
                                                
-- stdout --
	{"Name":"pause-256407","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-256407","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-256407 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.82s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-256407 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-256407 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-256407 --alsologtostderr -v=5: (2.996547051s)
--- PASS: TestPause/serial/DeletePaused (3.00s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (14.17s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (14.108662381s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-256407
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-256407: exit status 1 (16.948164ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-256407: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-420729 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-420729 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (243.419247ms)

                                                
                                                
-- stdout --
	* [false-420729] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:15:42.058700  203555 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:15:42.058891  203555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:42.058904  203555 out.go:374] Setting ErrFile to fd 2...
	I1129 09:15:42.058910  203555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:42.059219  203555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-2317/.minikube/bin
	I1129 09:15:42.059668  203555 out.go:368] Setting JSON to false
	I1129 09:15:42.060703  203555 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3493,"bootTime":1764404249,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1129 09:15:42.060784  203555 start.go:143] virtualization:  
	I1129 09:15:42.064566  203555 out.go:179] * [false-420729] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:15:42.068652  203555 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:15:42.068834  203555 notify.go:221] Checking for updates...
	I1129 09:15:42.075389  203555 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:15:42.078375  203555 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-2317/kubeconfig
	I1129 09:15:42.081686  203555 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-2317/.minikube
	I1129 09:15:42.084849  203555 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:15:42.088059  203555 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:15:42.092008  203555 config.go:182] Loaded profile config "running-upgrade-115889": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I1129 09:15:42.092124  203555 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:15:42.129668  203555 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:15:42.129813  203555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:42.228517  203555 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:15:42.217248823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:15:42.228705  203555 docker.go:319] overlay module found
	I1129 09:15:42.231908  203555 out.go:179] * Using the docker driver based on user configuration
	I1129 09:15:42.234891  203555 start.go:309] selected driver: docker
	I1129 09:15:42.234940  203555 start.go:927] validating driver "docker" against <nil>
	I1129 09:15:42.234955  203555 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:15:42.238846  203555 out.go:203] 
	W1129 09:15:42.241931  203555 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1129 09:15:42.244907  203555 out.go:203] 

                                                
                                                
** /stderr **
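The MK_USAGE exit above is the expected outcome: with the containerd runtime, minikube rejects --cni=false before provisioning anything. A sketch of a start invocation that would pass that check, reusing the flags from this run but swapping the rejected --cni=false for minikube's automatic CNI selection:

	out/minikube-linux-arm64 start -p false-420729 --memory=3072 --driver=docker --container-runtime=containerd --cni=auto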
net_test.go:88: 
----------------------- debugLogs start: false-420729 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-420729" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-420729" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:14:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-115889
contexts:
- context:
    cluster: running-upgrade-115889
    user: running-upgrade-115889
  name: running-upgrade-115889
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-115889
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/running-upgrade-115889/client.crt
    client-key: /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/running-upgrade-115889/client.key
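The probes in this dump fail because they all target the false-420729 context, which was never created (the start command exited on the CNI check); the only context in this kubeconfig belongs to the running-upgrade-115889 profile. A quick way to confirm what is actually available, using standard kubectl and the context name from this run:

	kubectl config get-contexts
	kubectl config use-context running-upgrade-115889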

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-420729

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-420729"

                                                
                                                
----------------------- debugLogs end: false-420729 [took: 3.399486099s] --------------------------------
helpers_test.go:175: Cleaning up "false-420729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-420729
--- PASS: TestNetworkPlugins/group/false (3.80s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (63.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m3.639095276s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (66.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m6.545373958s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-071895 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-071895 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.46465822s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-071895 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.63s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-071895 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-071895 --alsologtostderr -v=3: (12.917198244s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-071895 -n old-k8s-version-071895
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-071895 -n old-k8s-version-071895: exit status 7 (78.498358ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-071895 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
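Since the dashboard addon above is enabled while old-k8s-version-071895 is stopped, it is only recorded in the profile and takes effect on the next start. One way to check the recorded addon state for that profile, with the same binary and profile name as in this run:

	out/minikube-linux-arm64 addons list -p old-k8s-version-071895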

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (53.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-071895 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (53.111478327s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-071895 -n old-k8s-version-071895
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.65s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-230403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-230403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.220605382s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-230403 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.43s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-230403 --alsologtostderr -v=3
E1129 09:21:41.708292    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-230403 --alsologtostderr -v=3: (12.540363061s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-230403 -n no-preload-230403
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-230403 -n no-preload-230403: exit status 7 (73.941757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-230403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (49.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-230403 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (49.568315501s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-230403 -n no-preload-230403
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.99s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gfcjh" [5c62bc83-31ce-4420-a02d-bfa90c072ca5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003899099s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gfcjh" [5c62bc83-31ce-4420-a02d-bfa90c072ca5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003950666s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-071895 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-071895 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-071895 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-071895 --alsologtostderr -v=1: (1.048427191s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-071895 -n old-k8s-version-071895
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-071895 -n old-k8s-version-071895: exit status 2 (416.176613ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-071895 -n old-k8s-version-071895
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-071895 -n old-k8s-version-071895: exit status 2 (435.759771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-071895 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-071895 -n old-k8s-version-071895
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-071895 -n old-k8s-version-071895
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.79s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (80.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-086358 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-086358 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m20.974483662s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2nkg2" [bc66711e-1b3a-459d-8691-b53a90183fe3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002682714s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2nkg2" [bc66711e-1b3a-459d-8691-b53a90183fe3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003116448s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-230403 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-230403 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-230403 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-230403 --alsologtostderr -v=1: (1.043443298s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-230403 -n no-preload-230403
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-230403 -n no-preload-230403: exit status 2 (424.390725ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-230403 -n no-preload-230403
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-230403 -n no-preload-230403: exit status 2 (416.161637ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-230403 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-230403 -n no-preload-230403
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-230403 -n no-preload-230403
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.63s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-528769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-528769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m20.011844294s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.01s)
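The --apiserver-port=8444 flag above moves the API server off minikube's default 8443. A quick confirmation of the endpoint kubectl records for that profile, using standard kubectl and the context name from this run:

	kubectl cluster-info --context default-k8s-diff-port-528769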

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-086358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-086358 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-086358 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-086358 --alsologtostderr -v=3: (12.194436878s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-086358 -n embed-certs-086358
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-086358 -n embed-certs-086358: exit status 7 (73.505826ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-086358 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (54.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-086358 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-086358 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (53.969173038s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-086358 -n embed-certs-086358
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.59s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-528769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-528769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.110049088s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-528769 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-528769 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-528769 --alsologtostderr -v=3: (12.299859959s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769: exit status 7 (75.530706ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-528769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-528769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-528769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.700989933s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d9dpb" [32de8dc3-7c2c-4e86-82b9-3d7962be9de1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003832143s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d9dpb" [32de8dc3-7c2c-4e86-82b9-3d7962be9de1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003199267s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-086358 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-086358 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-086358 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-086358 -n embed-certs-086358
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-086358 -n embed-certs-086358: exit status 2 (370.748533ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-086358 -n embed-certs-086358
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-086358 -n embed-certs-086358: exit status 2 (370.436892ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-086358 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-086358 -n embed-certs-086358
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-086358 -n embed-certs-086358
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.32s)
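
For reference, the pause/unpause sequence this test drives can be replayed by hand; a minimal sketch using the same binary, profile and flags recorded above (exit status 2 from status while the node is paused is expected, as the test notes):

# pause the embed-certs-086358 profile (kubelet stops, control-plane containers are frozen)
out/minikube-linux-arm64 pause -p embed-certs-086358 --alsologtostderr -v=1

# while paused, status prints Paused/Stopped and exits with code 2; '|| true' keeps a script going
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p embed-certs-086358 -n embed-certs-086358 || true
out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p embed-certs-086358 -n embed-certs-086358 || true

# resume the profile
out/minikube-linux-arm64 unpause -p embed-certs-086358 --alsologtostderr -v=1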

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (39.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-287138 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1129 09:25:33.724170    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:25:33.730563    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:25:33.741936    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:25:33.763278    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:25:33.804697    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:25:33.886042    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:25:34.047485    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:25:34.369232    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:25:35.011284    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:25:36.292695    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-287138 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (39.394990364s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.40s)
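
The FirstStart invocation above can be rerun outside the harness; a sketch with the same flags the log records (profile name, pod CIDR and versions as in this run):

# start a docker-driver cluster with containerd, CNI networking and a custom pod CIDR,
# waiting only for the apiserver, system pods and default service account
out/minikube-linux-arm64 start -p newest-cni-287138 \
  --memory=3072 \
  --driver=docker --container-runtime=containerd \
  --kubernetes-version=v1.34.1 \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --wait=apiserver,system_pods,default_sa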

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-69p9b" [d8233483-7920-440e-87de-5d080ea43f1d] Running
E1129 09:25:38.853983    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003159894s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-69p9b" [d8233483-7920-440e-87de-5d080ea43f1d] Running
E1129 09:25:43.975695    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003623133s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-528769 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-528769 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-528769 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769: exit status 2 (357.9485ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769: exit status 2 (487.623147ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-528769 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-528769 -n default-k8s-diff-port-528769
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.88s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (88.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m28.299302794s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-287138 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-287138 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.286913316s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-287138 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-287138 --alsologtostderr -v=3: (3.583387944s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.58s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-287138 -n newest-cni-287138
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-287138 -n newest-cni-287138: exit status 7 (108.548154ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-287138 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)
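
Enabling an addon against a stopped profile, as this test does, is just two commands; a sketch mirroring the run above (the --images override is the one the test passes):

# a stopped profile makes 'status' exit 7, which the test tolerates
out/minikube-linux-arm64 status --format='{{.Host}}' -p newest-cni-287138 -n newest-cni-287138 || true

# enable the dashboard addon while stopped, overriding the MetricsScraper image
out/minikube-linux-arm64 addons enable dashboard -p newest-cni-287138 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4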

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-287138 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1129 09:26:14.698807    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:19.683980    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:19.690302    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:19.701654    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:19.722997    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:19.764378    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:19.845820    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:20.007717    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:20.329707    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:20.971124    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:22.252817    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:24.781928    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:24.815308    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-287138 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (24.810899726s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-287138 -n newest-cni-287138
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-287138 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)
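
The image check only shells out to 'image list --format=json' and scans the result; a rough manual equivalent (the jq filter and the repoTags field name are assumptions about the JSON shape, not something the test asserts):

# dump images present in the newest-cni-287138 profile as JSON
out/minikube-linux-arm64 -p newest-cni-287138 image list --format=json

# e.g. list tags only, assuming jq is installed and each entry carries a repoTags array
out/minikube-linux-arm64 -p newest-cni-287138 image list --format=json | jq -r '.[].repoTags[]'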

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-287138 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-287138 -n newest-cni-287138
E1129 09:26:29.936980    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-287138 -n newest-cni-287138: exit status 2 (550.345479ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-287138 -n newest-cni-287138
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-287138 -n newest-cni-287138: exit status 2 (467.43989ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-287138 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-287138 -n newest-cni-287138
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-287138 -n newest-cni-287138
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.76s)
E1129 09:32:24.477031    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:24.483453    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:24.494926    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:24.516438    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:24.557828    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:24.639302    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:24.800890    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:25.122321    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:25.763903    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:27.045729    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:29.607118    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:34.728850    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:44.970222    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:57.217679    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:57.224303    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:57.235734    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:57.257253    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:57.298667    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:57.380235    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:57.542019    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:57.863849    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:58.505816    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:32:59.787494    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (82.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1129 09:26:40.179156    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:41.708146    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:26:55.660149    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:27:00.660756    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m22.18652935s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.19s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-420729 "pgrep -a kubelet"
I1129 09:27:24.200790    4137 config.go:182] Loaded profile config "auto-420729": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-420729 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5pd8j" [7c7226e7-c3ca-4e4a-b5d2-4d37e833d76d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5pd8j" [7c7226e7-c3ca-4e4a-b5d2-4d37e833d76d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003461077s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)
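
The NetCatPod step force-replaces the netcat deployment from testdata and then polls for a ready pod; roughly the same thing by hand (kubectl wait stands in for the test's own polling loop):

# (re)create the netcat test deployment in the default namespace
kubectl --context auto-420729 replace --force -f testdata/netcat-deployment.yaml

# block until the pod labelled app=netcat reports Ready (the test allows up to 15m)
kubectl --context auto-420729 wait --for=condition=Ready pod -l app=netcat --timeout=15m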

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-420729 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
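
The DNS, Localhost and HairPin checks above are each a single exec into the netcat deployment; the manual equivalents, taken verbatim from the commands logged:

# DNS: resolve the cluster service name from inside the pod
kubectl --context auto-420729 exec deployment/netcat -- nslookup kubernetes.default

# Localhost: the pod can reach port 8080 on 127.0.0.1
kubectl --context auto-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

# HairPin: the pod can reach itself back through its own service name
kubectl --context auto-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"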

                                                
                                    
TestNetworkPlugins/group/calico/Start (60.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m0.939299659s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.94s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-z79kw" [97920fb0-83ce-4d2f-bfd9-ef962fd9fa49] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004474948s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
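
ControllerPod only waits for the CNI daemon pod to become healthy; the same check by hand, using the label and namespace the test polls:

# confirm the kindnet daemonset pod is Running/Ready in kube-system
kubectl --context kindnet-420729 get pods -n kube-system -l app=kindnet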

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-420729 "pgrep -a kubelet"
I1129 09:28:03.733123    4137 config.go:182] Loaded profile config "kindnet-420729": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.51s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-420729 replace --force -f testdata/netcat-deployment.yaml
I1129 09:28:04.380004    4137 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dg6dx" [c1eb7ac1-a4a1-4da6-a881-72a913ebe0d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dg6dx" [c1eb7ac1-a4a1-4da6-a881-72a913ebe0d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004372176s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.66s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-420729 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (63.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1129 09:28:47.572037    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/functional-378174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m3.190819343s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.19s)
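
Unlike the built-in plugins, the custom-flannel profile is started with a CNI manifest taken from testdata; a sketch of the same invocation recorded above:

# start a cluster and apply a user-supplied CNI manifest instead of a named plugin
out/minikube-linux-arm64 start -p custom-flannel-420729 \
  --memory=3072 \
  --cni=testdata/kube-flannel.yaml \
  --driver=docker --container-runtime=containerd \
  --wait=true --wait-timeout=15m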

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-ntl58" [3bd44522-e15b-4697-8a58-f217838a2572] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-ntl58" [3bd44522-e15b-4697-8a58-f217838a2572] Running
E1129 09:29:03.543517    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/no-preload-230403/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004238821s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-420729 "pgrep -a kubelet"
I1129 09:29:04.536934    4137 config.go:182] Loaded profile config "calico-420729": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-420729 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qzhzj" [fa92c51a-e13d-472f-af54-194f0fe81ae1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qzhzj" [fa92c51a-e13d-472f-af54-194f0fe81ae1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004353843s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.41s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-420729 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (86.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m26.133065803s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-420729 "pgrep -a kubelet"
I1129 09:29:43.684239    4137 config.go:182] Loaded profile config "custom-flannel-420729": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-420729 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x2wrx" [e9caac11-bbb5-4fa2-9835-752909bff004] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x2wrx" [e9caac11-bbb5-4fa2-9835-752909bff004] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004826464s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-420729 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1129 09:30:33.724138    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:30:39.065288    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/default-k8s-diff-port-528769/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:31:01.424488    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/old-k8s-version-071895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m3.745995056s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.75s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-420729 "pgrep -a kubelet"
I1129 09:31:07.131285    4137 config.go:182] Loaded profile config "enable-default-cni-420729": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-420729 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-46828" [bec6125c-fe8c-4ca4-b886-73fbfb95c715] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-46828" [bec6125c-fe8c-4ca4-b886-73fbfb95c715] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003719135s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-420729 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vn9bm" [9e67d9db-65f9-46c3-84b0-aabc3b9cea91] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004043172s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-420729 "pgrep -a kubelet"
I1129 09:31:33.256967    4137 config.go:182] Loaded profile config "flannel-420729": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-420729 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-74n2r" [e59e76d8-3838-46b9-bb3d-48507b9cd0a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-74n2r" [e59e76d8-3838-46b9-bb3d-48507b9cd0a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00423497s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (81.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1129 09:31:41.708144    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/addons-021028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-420729 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m21.849829639s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-420729 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-420729 "pgrep -a kubelet"
I1129 09:33:01.953976    4137 config.go:182] Loaded profile config "bridge-420729": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-420729 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rx9j5" [4e1390c8-d00f-4abf-9110-3d5ac7525617] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1129 09:33:02.349712    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:33:05.452386    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/auto-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rx9j5" [4e1390c8-d00f-4abf-9110-3d5ac7525617] Running
E1129 09:33:07.471793    4137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/kindnet-420729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003951981s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-420729 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
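The DNS, Localhost and HairPin checks for the bridge group all run inside the same netcat deployment created by the NetCatPod step. A minimal sketch of repeating them by hand, assuming the bridge-420729 context and the netcat deployment from this run still exist:

  kubectl --context bridge-420729 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context bridge-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context bridge-420729 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The last command is the hairpin case: the pod connects back to itself through the netcat service name instead of localhost.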

                                                
                                    

Test skip (30/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists; the binaries are already present within it.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test is for darwin and windows only
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists; the binaries are already present within it.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test is for darwin and windows only
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-542671 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-542671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-542671
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only the docker runtime is supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container-based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skipping the amd gpu test: it only runs with the docker driver on the amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with the docker container runtime; currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql; skipping the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: docker-env is only validated with the docker container runtime; currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: podman-env is only validated with the docker container runtime; currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env; currently testing the containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-267340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-267340
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-420729 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-420729" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-420729" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:14:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-115889
contexts:
- context:
    cluster: running-upgrade-115889
    user: running-upgrade-115889
  name: running-upgrade-115889
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-115889
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/running-upgrade-115889/client.crt
    client-key: /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/running-upgrade-115889/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-420729

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-420729"

                                                
                                                
----------------------- debugLogs end: kubenet-420729 [took: 3.473077175s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-420729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-420729
--- SKIP: TestNetworkPlugins/group/kubenet (3.64s)
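Every query in the debugLogs dump above fails with "context was not found" or "Profile ... not found" because the kubenet-420729 profile is never created; the test skips before any minikube start runs. A quick way to confirm that state, assuming the same kubeconfig and minikube home as this run:

  kubectl config get-contexts kubenet-420729
  out/minikube-linux-arm64 profile list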

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-420729 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-420729" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-2317/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:14:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-115889
contexts:
- context:
    cluster: running-upgrade-115889
    user: running-upgrade-115889
  name: running-upgrade-115889
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-115889
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/running-upgrade-115889/client.crt
    client-key: /home/jenkins/minikube-integration/22000-2317/.minikube/profiles/running-upgrade-115889/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-420729

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-420729" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-420729"

                                                
                                                
----------------------- debugLogs end: cilium-420729 [took: 3.82150985s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-420729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-420729
--- SKIP: TestNetworkPlugins/group/cilium (4.04s)

                                                
                                    